2023-07-16 18:15:16,073 DEBUG [main] hbase.HBaseTestingUtility(342): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6 2023-07-16 18:15:16,091 INFO [main] hbase.HBaseClassTestRule(94): Test class org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1 timeout: 13 mins 2023-07-16 18:15:16,115 INFO [Time-limited test] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-16 18:15:16,116 INFO [Time-limited test] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/cluster_87eeeeb5-5880-a92c-ac20-b2d9553ef3c2, deleteOnExit=true 2023-07-16 18:15:16,116 INFO [Time-limited test] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-16 18:15:16,117 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/test.cache.data in system properties and HBase conf 2023-07-16 18:15:16,117 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/hadoop.tmp.dir in system properties and HBase conf 2023-07-16 18:15:16,117 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/hadoop.log.dir in system properties and HBase conf 2023-07-16 18:15:16,118 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-16 18:15:16,119 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-16 18:15:16,119 INFO [Time-limited test] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-16 18:15:16,233 WARN [Time-limited test] util.NativeCodeLoader(62): Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2023-07-16 18:15:16,615 DEBUG [Time-limited test] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-16 18:15:16,620 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-16 18:15:16,621 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-16 18:15:16,621 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-16 18:15:16,621 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-16 18:15:16,622 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-16 18:15:16,622 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-16 18:15:16,622 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-16 18:15:16,623 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-16 18:15:16,623 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-16 18:15:16,624 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/nfs.dump.dir in system properties and HBase conf 2023-07-16 18:15:16,624 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/java.io.tmpdir in system properties and HBase conf 2023-07-16 18:15:16,624 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-16 18:15:16,625 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-16 18:15:16,625 INFO [Time-limited test] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-16 18:15:17,106 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-16 18:15:17,111 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-16 18:15:17,388 WARN [Time-limited test] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-16 18:15:17,550 INFO [Time-limited test] log.Slf4jLog(67): Logging to org.slf4j.impl.Reload4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2023-07-16 18:15:17,573 WARN [Time-limited test] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 18:15:17,614 INFO [Time-limited test] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 18:15:17,651 INFO [Time-limited test] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/java.io.tmpdir/Jetty_localhost_37271_hdfs____nawdoy/webapp 2023-07-16 18:15:17,816 INFO [Time-limited test] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37271 2023-07-16 18:15:17,828 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-16 18:15:17,829 WARN [Time-limited test] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-16 18:15:18,384 WARN [Listener at localhost/36523] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 18:15:18,463 WARN [Listener at localhost/36523] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 18:15:18,480 WARN [Listener at localhost/36523] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 18:15:18,486 INFO [Listener at localhost/36523] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 18:15:18,490 INFO [Listener at localhost/36523] log.Slf4jLog(67): Extract 
jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/java.io.tmpdir/Jetty_localhost_37619_datanode____t7rx7l/webapp 2023-07-16 18:15:18,604 INFO [Listener at localhost/36523] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37619 2023-07-16 18:15:19,110 WARN [Listener at localhost/38683] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 18:15:19,157 WARN [Listener at localhost/38683] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 18:15:19,162 WARN [Listener at localhost/38683] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 18:15:19,165 INFO [Listener at localhost/38683] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 18:15:19,171 INFO [Listener at localhost/38683] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/java.io.tmpdir/Jetty_localhost_40577_datanode____.pq2eo0/webapp 2023-07-16 18:15:19,301 INFO [Listener at localhost/38683] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:40577 2023-07-16 18:15:19,314 WARN [Listener at localhost/44159] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 18:15:19,397 WARN [Listener at localhost/44159] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 18:15:19,406 WARN [Listener at localhost/44159] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 18:15:19,409 INFO [Listener at localhost/44159] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 18:15:19,425 INFO [Listener at localhost/44159] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/java.io.tmpdir/Jetty_localhost_33479_datanode____xzmnx9/webapp 2023-07-16 18:15:19,637 INFO [Listener at localhost/44159] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:33479 2023-07-16 18:15:19,707 WARN [Listener at localhost/38073] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 18:15:19,913 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd06c65db20d43442: Processing first storage report for DS-26280529-f724-4ea2-95a9-ef1b4940f60b from datanode fcd6476b-4f3d-4459-9937-6defe535eaf1 2023-07-16 18:15:19,915 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd06c65db20d43442: from storage DS-26280529-f724-4ea2-95a9-ef1b4940f60b node DatanodeRegistration(127.0.0.1:42677, datanodeUuid=fcd6476b-4f3d-4459-9937-6defe535eaf1, infoPort=43729, 
infoSecurePort=0, ipcPort=44159, storageInfo=lv=-57;cid=testClusterID;nsid=24449609;c=1689531317183), blocks: 0, hasStaleStorage: true, processing time: 2 msecs, invalidatedBlocks: 0 2023-07-16 18:15:19,915 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2df6ebff473a435: Processing first storage report for DS-6ab893e3-d584-4af5-92b1-09319086e884 from datanode fdd5abaa-ab30-459a-874d-e3e11aad81f8 2023-07-16 18:15:19,916 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2df6ebff473a435: from storage DS-6ab893e3-d584-4af5-92b1-09319086e884 node DatanodeRegistration(127.0.0.1:36769, datanodeUuid=fdd5abaa-ab30-459a-874d-e3e11aad81f8, infoPort=35943, infoSecurePort=0, ipcPort=38683, storageInfo=lv=-57;cid=testClusterID;nsid=24449609;c=1689531317183), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 18:15:19,916 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xd06c65db20d43442: Processing first storage report for DS-abd05c01-c22f-446a-a1ae-ed24b5db4d87 from datanode fcd6476b-4f3d-4459-9937-6defe535eaf1 2023-07-16 18:15:19,916 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xd06c65db20d43442: from storage DS-abd05c01-c22f-446a-a1ae-ed24b5db4d87 node DatanodeRegistration(127.0.0.1:42677, datanodeUuid=fcd6476b-4f3d-4459-9937-6defe535eaf1, infoPort=43729, infoSecurePort=0, ipcPort=44159, storageInfo=lv=-57;cid=testClusterID;nsid=24449609;c=1689531317183), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 18:15:19,916 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x2df6ebff473a435: Processing first storage report for DS-36aa450e-f352-4bc3-b36e-740548fe642f from datanode fdd5abaa-ab30-459a-874d-e3e11aad81f8 2023-07-16 18:15:19,916 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x2df6ebff473a435: from storage DS-36aa450e-f352-4bc3-b36e-740548fe642f node DatanodeRegistration(127.0.0.1:36769, datanodeUuid=fdd5abaa-ab30-459a-874d-e3e11aad81f8, infoPort=35943, infoSecurePort=0, ipcPort=38683, storageInfo=lv=-57;cid=testClusterID;nsid=24449609;c=1689531317183), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 18:15:19,951 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x36a18ff7b8af21d2: Processing first storage report for DS-918fb536-8f98-483c-9977-d31c79d2a9f3 from datanode 0f72b657-df25-4af1-8609-e86c97193b0a 2023-07-16 18:15:19,951 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x36a18ff7b8af21d2: from storage DS-918fb536-8f98-483c-9977-d31c79d2a9f3 node DatanodeRegistration(127.0.0.1:37737, datanodeUuid=0f72b657-df25-4af1-8609-e86c97193b0a, infoPort=39961, infoSecurePort=0, ipcPort=38073, storageInfo=lv=-57;cid=testClusterID;nsid=24449609;c=1689531317183), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 18:15:19,952 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x36a18ff7b8af21d2: Processing first storage report for DS-bec1276d-2d58-4f29-b8c4-96658cdabe48 from datanode 0f72b657-df25-4af1-8609-e86c97193b0a 2023-07-16 18:15:19,952 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x36a18ff7b8af21d2: from storage 
DS-bec1276d-2d58-4f29-b8c4-96658cdabe48 node DatanodeRegistration(127.0.0.1:37737, datanodeUuid=0f72b657-df25-4af1-8609-e86c97193b0a, infoPort=39961, infoSecurePort=0, ipcPort=38073, storageInfo=lv=-57;cid=testClusterID;nsid=24449609;c=1689531317183), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 18:15:20,193 DEBUG [Listener at localhost/38073] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6 2023-07-16 18:15:20,314 INFO [Listener at localhost/38073] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/cluster_87eeeeb5-5880-a92c-ac20-b2d9553ef3c2/zookeeper_0, clientPort=53498, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/cluster_87eeeeb5-5880-a92c-ac20-b2d9553ef3c2/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/cluster_87eeeeb5-5880-a92c-ac20-b2d9553ef3c2/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-16 18:15:20,340 INFO [Listener at localhost/38073] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=53498 2023-07-16 18:15:20,352 INFO [Listener at localhost/38073] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:20,354 INFO [Listener at localhost/38073] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:21,012 INFO [Listener at localhost/38073] util.FSUtils(471): Created version file at hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92 with version=8 2023-07-16 18:15:21,013 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(1406): Setting hbase.fs.tmp.dir to hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/hbase-staging 2023-07-16 18:15:21,022 DEBUG [Listener at localhost/38073] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-16 18:15:21,022 DEBUG [Listener at localhost/38073] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-16 18:15:21,022 DEBUG [Listener at localhost/38073] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-16 18:15:21,022 DEBUG [Listener at localhost/38073] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 
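
The bootstrap logged above (StartMiniClusterOption{numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1}, a three-datanode DFS, and a MiniZooKeeperCluster on clientPort=53498) is the kind of setup a test in this module drives through HBaseTestingUtility. The sketch below is a minimal, hedged reconstruction using HBase 2.4-era test utilities; the class name MiniClusterStartupSketch is hypothetical, and the real TestRSGroupsAdmin1 setup may wire things differently.

import org.apache.hadoop.hbase.HBaseClassTestRule;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.ClassRule;

public class MiniClusterStartupSketch {

  // Enforces the per-class timeout reported in the log ("timeout: 13 mins").
  @ClassRule
  public static final HBaseClassTestRule CLASS_RULE =
      HBaseClassTestRule.forClass(MiniClusterStartupSketch.class);

  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUpBeforeClass() throws Exception {
    // Mirrors the logged StartMiniClusterOption{numMasters=1,
    // numRegionServers=3, numDataNodes=3, numZkServers=1}.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    TEST_UTIL.startMiniCluster(option);
  }

  @AfterClass
  public static void tearDownAfterClass() throws Exception {
    TEST_UTIL.shutdownMiniCluster();
  }
}
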
2023-07-16 18:15:21,412 INFO [Listener at localhost/38073] metrics.MetricRegistriesLoader(60): Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl 2023-07-16 18:15:21,974 INFO [Listener at localhost/38073] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 18:15:22,013 INFO [Listener at localhost/38073] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:22,014 INFO [Listener at localhost/38073] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:22,014 INFO [Listener at localhost/38073] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 18:15:22,014 INFO [Listener at localhost/38073] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:22,014 INFO [Listener at localhost/38073] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 18:15:22,174 INFO [Listener at localhost/38073] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 18:15:22,254 DEBUG [Listener at localhost/38073] util.ClassSize(228): Using Unsafe to estimate memory layout 2023-07-16 18:15:22,360 INFO [Listener at localhost/38073] ipc.NettyRpcServer(120): Bind to /172.31.14.131:45445 2023-07-16 18:15:22,372 INFO [Listener at localhost/38073] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:22,374 INFO [Listener at localhost/38073] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:22,396 INFO [Listener at localhost/38073] zookeeper.RecoverableZooKeeper(93): Process identifier=master:45445 connecting to ZooKeeper ensemble=127.0.0.1:53498 2023-07-16 18:15:22,442 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:454450x0, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 18:15:22,445 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:45445-0x1016f588ace0000 connected 2023-07-16 18:15:22,478 DEBUG [Listener at localhost/38073] zookeeper.ZKUtil(164): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 18:15:22,480 DEBUG [Listener at localhost/38073] zookeeper.ZKUtil(164): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 18:15:22,483 DEBUG [Listener at localhost/38073] zookeeper.ZKUtil(164): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, 
baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 18:15:22,494 DEBUG [Listener at localhost/38073] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=45445 2023-07-16 18:15:22,495 DEBUG [Listener at localhost/38073] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=45445 2023-07-16 18:15:22,496 DEBUG [Listener at localhost/38073] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=45445 2023-07-16 18:15:22,504 DEBUG [Listener at localhost/38073] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=45445 2023-07-16 18:15:22,505 DEBUG [Listener at localhost/38073] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=45445 2023-07-16 18:15:22,539 INFO [Listener at localhost/38073] log.Log(170): Logging initialized @7319ms to org.apache.hbase.thirdparty.org.eclipse.jetty.util.log.Slf4jLog 2023-07-16 18:15:22,683 INFO [Listener at localhost/38073] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 18:15:22,684 INFO [Listener at localhost/38073] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 18:15:22,685 INFO [Listener at localhost/38073] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 18:15:22,687 INFO [Listener at localhost/38073] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-16 18:15:22,687 INFO [Listener at localhost/38073] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 18:15:22,687 INFO [Listener at localhost/38073] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 18:15:22,691 INFO [Listener at localhost/38073] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
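
At this point the master at port 45445 has connected to the ensemble at 127.0.0.1:53498 and set watchers on /hbase/master, /hbase/running and /hbase/acl before those znodes exist. The sketch below uses the plain Apache ZooKeeper client, not HBase's ZKWatcher/ZKUtil, to show the underlying pattern: exists() returns null for a missing znode but still registers the watcher, so the NodeCreated event for /hbase/master that appears later in this log is delivered once the active master registers itself. ZnodeWatchSketch and its timeout values are assumptions for illustration only.

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class ZnodeWatchSketch {
  public static void main(String[] args) throws Exception {
    CountDownLatch masterCreated = new CountDownLatch(1);

    // Ensemble address taken from the log above; the session timeout is arbitrary.
    ZooKeeper zk = new ZooKeeper("127.0.0.1:53498", 30000, event -> { });

    Watcher masterWatcher = (WatchedEvent event) -> {
      if (event.getType() == Watcher.Event.EventType.NodeCreated
          && "/hbase/master".equals(event.getPath())) {
        masterCreated.countDown();
      }
    };

    // exists() returns null while the znode is absent, but the watcher is
    // registered either way and fires once /hbase/master is created.
    Stat stat = zk.exists("/hbase/master", masterWatcher);
    if (stat == null) {
      masterCreated.await();
    }
    zk.close();
  }
}
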
2023-07-16 18:15:22,765 INFO [Listener at localhost/38073] http.HttpServer(1146): Jetty bound to port 34821 2023-07-16 18:15:22,767 INFO [Listener at localhost/38073] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 18:15:22,817 INFO [Listener at localhost/38073] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:22,821 INFO [Listener at localhost/38073] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1ee15e41{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/hadoop.log.dir/,AVAILABLE} 2023-07-16 18:15:22,822 INFO [Listener at localhost/38073] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:22,822 INFO [Listener at localhost/38073] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@3ebc6750{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 18:15:23,055 INFO [Listener at localhost/38073] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 18:15:23,068 INFO [Listener at localhost/38073] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 18:15:23,068 INFO [Listener at localhost/38073] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 18:15:23,070 INFO [Listener at localhost/38073] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 18:15:23,078 INFO [Listener at localhost/38073] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:23,104 INFO [Listener at localhost/38073] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1f4374a9{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/java.io.tmpdir/jetty-0_0_0_0-34821-hbase-server-2_4_18-SNAPSHOT_jar-_-any-715962853867512848/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-16 18:15:23,117 INFO [Listener at localhost/38073] server.AbstractConnector(333): Started ServerConnector@5ee050b2{HTTP/1.1, (http/1.1)}{0.0.0.0:34821} 2023-07-16 18:15:23,117 INFO [Listener at localhost/38073] server.Server(415): Started @7896ms 2023-07-16 18:15:23,121 INFO [Listener at localhost/38073] master.HMaster(444): hbase.rootdir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92, hbase.cluster.distributed=false 2023-07-16 18:15:23,220 INFO [Listener at localhost/38073] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 18:15:23,220 INFO [Listener at localhost/38073] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:23,220 INFO [Listener at localhost/38073] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:23,220 INFO 
[Listener at localhost/38073] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 18:15:23,221 INFO [Listener at localhost/38073] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:23,221 INFO [Listener at localhost/38073] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 18:15:23,229 INFO [Listener at localhost/38073] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 18:15:23,232 INFO [Listener at localhost/38073] ipc.NettyRpcServer(120): Bind to /172.31.14.131:33809 2023-07-16 18:15:23,236 INFO [Listener at localhost/38073] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 18:15:23,245 DEBUG [Listener at localhost/38073] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 18:15:23,246 INFO [Listener at localhost/38073] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:23,248 INFO [Listener at localhost/38073] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:23,250 INFO [Listener at localhost/38073] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:33809 connecting to ZooKeeper ensemble=127.0.0.1:53498 2023-07-16 18:15:23,254 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:338090x0, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 18:15:23,255 DEBUG [Listener at localhost/38073] zookeeper.ZKUtil(164): regionserver:338090x0, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 18:15:23,255 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:33809-0x1016f588ace0001 connected 2023-07-16 18:15:23,257 DEBUG [Listener at localhost/38073] zookeeper.ZKUtil(164): regionserver:33809-0x1016f588ace0001, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 18:15:23,258 DEBUG [Listener at localhost/38073] zookeeper.ZKUtil(164): regionserver:33809-0x1016f588ace0001, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 18:15:23,258 DEBUG [Listener at localhost/38073] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=33809 2023-07-16 18:15:23,259 DEBUG [Listener at localhost/38073] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=33809 2023-07-16 18:15:23,259 DEBUG [Listener at localhost/38073] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=33809 2023-07-16 18:15:23,262 DEBUG [Listener at localhost/38073] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=33809 2023-07-16 18:15:23,263 DEBUG [Listener at localhost/38073] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=33809 2023-07-16 18:15:23,266 INFO [Listener at localhost/38073] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 18:15:23,266 INFO [Listener at localhost/38073] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 18:15:23,266 INFO [Listener at localhost/38073] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 18:15:23,268 INFO [Listener at localhost/38073] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 18:15:23,268 INFO [Listener at localhost/38073] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 18:15:23,268 INFO [Listener at localhost/38073] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 18:15:23,268 INFO [Listener at localhost/38073] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 18:15:23,270 INFO [Listener at localhost/38073] http.HttpServer(1146): Jetty bound to port 42095 2023-07-16 18:15:23,270 INFO [Listener at localhost/38073] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 18:15:23,275 INFO [Listener at localhost/38073] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:23,276 INFO [Listener at localhost/38073] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@4d5f0290{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/hadoop.log.dir/,AVAILABLE} 2023-07-16 18:15:23,276 INFO [Listener at localhost/38073] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:23,276 INFO [Listener at localhost/38073] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@43302c52{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 18:15:23,400 INFO [Listener at localhost/38073] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 18:15:23,402 INFO [Listener at localhost/38073] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 18:15:23,402 INFO [Listener at localhost/38073] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 18:15:23,402 INFO [Listener at localhost/38073] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-16 18:15:23,403 INFO [Listener at localhost/38073] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:23,407 INFO 
[Listener at localhost/38073] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@1cb98983{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/java.io.tmpdir/jetty-0_0_0_0-42095-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3796704313274539397/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 18:15:23,409 INFO [Listener at localhost/38073] server.AbstractConnector(333): Started ServerConnector@38645c85{HTTP/1.1, (http/1.1)}{0.0.0.0:42095} 2023-07-16 18:15:23,409 INFO [Listener at localhost/38073] server.Server(415): Started @8188ms 2023-07-16 18:15:23,423 INFO [Listener at localhost/38073] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 18:15:23,423 INFO [Listener at localhost/38073] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:23,423 INFO [Listener at localhost/38073] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:23,424 INFO [Listener at localhost/38073] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 18:15:23,424 INFO [Listener at localhost/38073] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:23,424 INFO [Listener at localhost/38073] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 18:15:23,424 INFO [Listener at localhost/38073] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 18:15:23,426 INFO [Listener at localhost/38073] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43375 2023-07-16 18:15:23,427 INFO [Listener at localhost/38073] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 18:15:23,428 DEBUG [Listener at localhost/38073] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 18:15:23,429 INFO [Listener at localhost/38073] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:23,431 INFO [Listener at localhost/38073] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:23,432 INFO [Listener at localhost/38073] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43375 connecting to ZooKeeper ensemble=127.0.0.1:53498 2023-07-16 18:15:23,435 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:433750x0, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 
18:15:23,436 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43375-0x1016f588ace0002 connected 2023-07-16 18:15:23,436 DEBUG [Listener at localhost/38073] zookeeper.ZKUtil(164): regionserver:43375-0x1016f588ace0002, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 18:15:23,437 DEBUG [Listener at localhost/38073] zookeeper.ZKUtil(164): regionserver:43375-0x1016f588ace0002, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 18:15:23,438 DEBUG [Listener at localhost/38073] zookeeper.ZKUtil(164): regionserver:43375-0x1016f588ace0002, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 18:15:23,438 DEBUG [Listener at localhost/38073] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43375 2023-07-16 18:15:23,439 DEBUG [Listener at localhost/38073] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43375 2023-07-16 18:15:23,439 DEBUG [Listener at localhost/38073] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43375 2023-07-16 18:15:23,439 DEBUG [Listener at localhost/38073] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43375 2023-07-16 18:15:23,439 DEBUG [Listener at localhost/38073] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43375 2023-07-16 18:15:23,442 INFO [Listener at localhost/38073] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 18:15:23,442 INFO [Listener at localhost/38073] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 18:15:23,442 INFO [Listener at localhost/38073] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 18:15:23,442 INFO [Listener at localhost/38073] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 18:15:23,443 INFO [Listener at localhost/38073] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 18:15:23,443 INFO [Listener at localhost/38073] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 18:15:23,443 INFO [Listener at localhost/38073] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
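
Each server above instantiates the same set of RPC executors (default.FPBQ.Fifo, priority.RWQ.Fifo, replication.FPBQ.Fifo, metaPriority.FPBQ.Fifo) with handlerCount=3 and maxQueueLength=30. The sketch below shows, under stated assumptions, where such numbers usually come from in HBase configuration: hbase.regionserver.handler.count for the handler count, and a call queue length that defaults to ten calls per handler. Tying the logged value 3 to that key is an inference from the log, not something confirmed by TestRSGroupsAdmin1 itself; RpcHandlerConfigSketch is a hypothetical name.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RpcHandlerConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();

    // Three handlers per executor, matching the logged "handlerCount=3".
    conf.setInt("hbase.regionserver.handler.count", 3);

    // The call queue length defaults to ten calls per handler, which is
    // consistent with the logged "maxQueueLength=30".
    int handlers = conf.getInt("hbase.regionserver.handler.count", 30);
    int maxQueueLength =
        conf.getInt("hbase.ipc.server.max.callqueue.length", handlers * 10);

    System.out.println("handlers=" + handlers + ", maxQueueLength=" + maxQueueLength);
  }
}
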
2023-07-16 18:15:23,444 INFO [Listener at localhost/38073] http.HttpServer(1146): Jetty bound to port 36707 2023-07-16 18:15:23,444 INFO [Listener at localhost/38073] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 18:15:23,446 INFO [Listener at localhost/38073] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:23,447 INFO [Listener at localhost/38073] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1e5ca88b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/hadoop.log.dir/,AVAILABLE} 2023-07-16 18:15:23,447 INFO [Listener at localhost/38073] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:23,447 INFO [Listener at localhost/38073] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@26906e56{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 18:15:23,573 INFO [Listener at localhost/38073] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 18:15:23,574 INFO [Listener at localhost/38073] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 18:15:23,574 INFO [Listener at localhost/38073] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 18:15:23,574 INFO [Listener at localhost/38073] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-16 18:15:23,575 INFO [Listener at localhost/38073] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:23,576 INFO [Listener at localhost/38073] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@5612588a{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/java.io.tmpdir/jetty-0_0_0_0-36707-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6626196181628761921/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 18:15:23,578 INFO [Listener at localhost/38073] server.AbstractConnector(333): Started ServerConnector@587ce315{HTTP/1.1, (http/1.1)}{0.0.0.0:36707} 2023-07-16 18:15:23,578 INFO [Listener at localhost/38073] server.Server(415): Started @8357ms 2023-07-16 18:15:23,591 INFO [Listener at localhost/38073] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 18:15:23,591 INFO [Listener at localhost/38073] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:23,591 INFO [Listener at localhost/38073] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:23,591 INFO [Listener at localhost/38073] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 18:15:23,591 INFO 
[Listener at localhost/38073] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:23,591 INFO [Listener at localhost/38073] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 18:15:23,591 INFO [Listener at localhost/38073] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 18:15:23,593 INFO [Listener at localhost/38073] ipc.NettyRpcServer(120): Bind to /172.31.14.131:41927 2023-07-16 18:15:23,593 INFO [Listener at localhost/38073] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 18:15:23,595 DEBUG [Listener at localhost/38073] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 18:15:23,597 INFO [Listener at localhost/38073] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:23,599 INFO [Listener at localhost/38073] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:23,600 INFO [Listener at localhost/38073] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:41927 connecting to ZooKeeper ensemble=127.0.0.1:53498 2023-07-16 18:15:23,604 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:419270x0, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 18:15:23,605 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:41927-0x1016f588ace0003 connected 2023-07-16 18:15:23,605 DEBUG [Listener at localhost/38073] zookeeper.ZKUtil(164): regionserver:41927-0x1016f588ace0003, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 18:15:23,606 DEBUG [Listener at localhost/38073] zookeeper.ZKUtil(164): regionserver:41927-0x1016f588ace0003, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 18:15:23,607 DEBUG [Listener at localhost/38073] zookeeper.ZKUtil(164): regionserver:41927-0x1016f588ace0003, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 18:15:23,608 DEBUG [Listener at localhost/38073] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=41927 2023-07-16 18:15:23,608 DEBUG [Listener at localhost/38073] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=41927 2023-07-16 18:15:23,608 DEBUG [Listener at localhost/38073] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=41927 2023-07-16 18:15:23,609 DEBUG [Listener at localhost/38073] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=41927 2023-07-16 18:15:23,609 DEBUG [Listener at localhost/38073] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=41927 2023-07-16 18:15:23,611 INFO [Listener at localhost/38073] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 18:15:23,612 INFO [Listener at localhost/38073] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 18:15:23,612 INFO [Listener at localhost/38073] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 18:15:23,612 INFO [Listener at localhost/38073] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 18:15:23,613 INFO [Listener at localhost/38073] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 18:15:23,613 INFO [Listener at localhost/38073] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 18:15:23,613 INFO [Listener at localhost/38073] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 18:15:23,614 INFO [Listener at localhost/38073] http.HttpServer(1146): Jetty bound to port 36951 2023-07-16 18:15:23,614 INFO [Listener at localhost/38073] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 18:15:23,616 INFO [Listener at localhost/38073] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:23,616 INFO [Listener at localhost/38073] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@53ca0225{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/hadoop.log.dir/,AVAILABLE} 2023-07-16 18:15:23,617 INFO [Listener at localhost/38073] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:23,617 INFO [Listener at localhost/38073] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@415a80d4{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 18:15:23,740 INFO [Listener at localhost/38073] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 18:15:23,741 INFO [Listener at localhost/38073] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 18:15:23,742 INFO [Listener at localhost/38073] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 18:15:23,742 INFO [Listener at localhost/38073] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-16 18:15:23,743 INFO [Listener at localhost/38073] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:23,744 INFO [Listener at localhost/38073] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@325896d4{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/java.io.tmpdir/jetty-0_0_0_0-36951-hbase-server-2_4_18-SNAPSHOT_jar-_-any-9191717505769663433/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 18:15:23,745 INFO [Listener at localhost/38073] server.AbstractConnector(333): Started ServerConnector@5248e24{HTTP/1.1, (http/1.1)}{0.0.0.0:36951} 2023-07-16 18:15:23,745 INFO [Listener at localhost/38073] server.Server(415): Started @8525ms 2023-07-16 18:15:23,751 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 18:15:23,754 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@10d7fd75{HTTP/1.1, (http/1.1)}{0.0.0.0:38541} 2023-07-16 18:15:23,755 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @8534ms 2023-07-16 18:15:23,755 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,45445,1689531321197 2023-07-16 18:15:23,765 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-16 18:15:23,766 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,45445,1689531321197 2023-07-16 18:15:23,785 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:33809-0x1016f588ace0001, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 18:15:23,786 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 18:15:23,786 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:41927-0x1016f588ace0003, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 18:15:23,786 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:43375-0x1016f588ace0002, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 18:15:23,786 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:23,788 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 18:15:23,789 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,45445,1689531321197 from backup master directory 2023-07-16 18:15:23,789 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 18:15:23,795 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,45445,1689531321197 2023-07-16 18:15:23,795 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-16 18:15:23,796 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 18:15:23,796 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,45445,1689531321197 2023-07-16 18:15:23,800 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating data MemStoreChunkPool with chunk size 2 MB, max count 352, initial count 0 2023-07-16 18:15:23,802 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.ChunkCreator(497): Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 391, initial count 0 2023-07-16 18:15:23,912 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/hbase.id with ID: 750c298c-5826-454f-bacc-8ea8fc4f680a 2023-07-16 18:15:23,950 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:23,968 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:24,020 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x4c0e5421 to 127.0.0.1:53498 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 18:15:24,046 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1b1a49e4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 18:15:24,073 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 18:15:24,075 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-16 18:15:24,099 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(264): ClientProtocol::create wrong number of arguments, should be hadoop 3.2 or below 2023-07-16 18:15:24,099 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(270): ClientProtocol::create wrong number of arguments, should be hadoop 2.x 2023-07-16 18:15:24,101 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(279): can not find SHOULD_REPLICATE flag, should be hadoop 2.x
java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.CreateFlag.SHOULD_REPLICATE
    at java.lang.Enum.valueOf(Enum.java:238)
    at org.apache.hadoop.fs.CreateFlag.valueOf(CreateFlag.java:63)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.loadShouldReplicateFlag(FanOutOneBlockAsyncDFSOutputHelper.java:277)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.<clinit>(FanOutOneBlockAsyncDFSOutputHelper.java:304)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:139)
    at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135)
    at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175)
    at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202)
    at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182)
    at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339)
    at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
    at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193)
    at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528)
    at java.lang.Thread.run(Thread.java:750)
2023-07-16 18:15:24,106 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(243): No decryptEncryptedDataEncryptionKey method in DFSClient, should be hadoop version with HDFS-12396
java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
    at java.lang.Class.getDeclaredMethod(Class.java:2130)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelperWithoutHDFS12396(FanOutOneBlockAsyncDFSOutputSaslHelper.java:182)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:241)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:252)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.load(AsyncFSWALProvider.java:140)
    at org.apache.hadoop.hbase.wal.WALFactory.getProviderClass(WALFactory.java:135)
    at org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:175)
    at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:202)
    at org.apache.hadoop.hbase.wal.WALFactory.<init>(WALFactory.java:182)
    at org.apache.hadoop.hbase.master.region.MasterRegion.create(MasterRegion.java:339)
    at org.apache.hadoop.hbase.master.region.MasterRegionFactory.create(MasterRegionFactory.java:104)
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:855)
    at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2193)
    at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:528)
    at java.lang.Thread.run(Thread.java:750)
2023-07-16 18:15:24,108 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 18:15:24,151 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/MasterData/data/master/store-tmp 2023-07-16 18:15:24,189 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:24,190 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-16 18:15:24,190 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 18:15:24,190 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 18:15:24,190 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-16 18:15:24,190 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 18:15:24,190 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682.
2023-07-16 18:15:24,190 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 18:15:24,192 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/MasterData/WALs/jenkins-hbase4.apache.org,45445,1689531321197 2023-07-16 18:15:24,214 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C45445%2C1689531321197, suffix=, logDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/MasterData/WALs/jenkins-hbase4.apache.org,45445,1689531321197, archiveDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/MasterData/oldWALs, maxLogs=10 2023-07-16 18:15:24,269 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36769,DS-6ab893e3-d584-4af5-92b1-09319086e884,DISK] 2023-07-16 18:15:24,269 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37737,DS-918fb536-8f98-483c-9977-d31c79d2a9f3,DISK] 2023-07-16 18:15:24,269 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42677,DS-26280529-f724-4ea2-95a9-ef1b4940f60b,DISK] 2023-07-16 18:15:24,278 DEBUG [RS-EventLoopGroup-5-1] asyncfs.ProtobufDecoder(123): Hadoop 3.2 and below use unshaded protobuf. 
java.lang.ClassNotFoundException: org.apache.hadoop.thirdparty.protobuf.MessageLite
    at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.hadoop.hbase.io.asyncfs.ProtobufDecoder.<clinit>(ProtobufDecoder.java:118)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.processWriteBlockResponse(FanOutOneBlockAsyncDFSOutputHelper.java:340)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$100(FanOutOneBlockAsyncDFSOutputHelper.java:112)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$4.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:424)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:557)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.addListener(DefaultPromise.java:185)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:418)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:112)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:476)
    at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$5.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:471)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:625)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:105)
    at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:653)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:691)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:489)
    at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:397)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
    at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:750)
2023-07-16 18:15:24,354 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/MasterData/WALs/jenkins-hbase4.apache.org,45445,1689531321197/jenkins-hbase4.apache.org%2C45445%2C1689531321197.1689531324225 2023-07-16 18:15:24,355 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42677,DS-26280529-f724-4ea2-95a9-ef1b4940f60b,DISK], DatanodeInfoWithStorage[127.0.0.1:36769,DS-6ab893e3-d584-4af5-92b1-09319086e884,DISK], DatanodeInfoWithStorage[127.0.0.1:37737,DS-918fb536-8f98-483c-9977-d31c79d2a9f3,DISK]] 2023-07-16 18:15:24,355 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:24,356 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:24,360 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 18:15:24,361 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 18:15:24,427 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-16 18:15:24,435 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-16 18:15:24,465 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-16 18:15:24,479 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE,
compression=NONE 2023-07-16 18:15:24,485 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-16 18:15:24,487 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-16 18:15:24,507 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 18:15:24,511 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:24,512 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11352518880, jitterRate=0.05728571116924286}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:24,513 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 18:15:24,514 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-16 18:15:24,539 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-16 18:15:24,539 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-16 18:15:24,543 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-16 18:15:24,545 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 1 msec 2023-07-16 18:15:24,585 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 40 msec 2023-07-16 18:15:24,586 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-16 18:15:24,613 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-16 18:15:24,619 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 
2023-07-16 18:15:24,628 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-16 18:15:24,636 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-16 18:15:24,642 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-16 18:15:24,645 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:24,646 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-16 18:15:24,647 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-16 18:15:24,663 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-16 18:15:24,669 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:43375-0x1016f588ace0002, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 18:15:24,669 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:33809-0x1016f588ace0001, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 18:15:24,669 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:41927-0x1016f588ace0003, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 18:15:24,669 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 18:15:24,669 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:24,670 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,45445,1689531321197, sessionid=0x1016f588ace0000, setting cluster-up flag (Was=false) 2023-07-16 18:15:24,687 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:24,693 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, 
/hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-16 18:15:24,694 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,45445,1689531321197 2023-07-16 18:15:24,699 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:24,705 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-16 18:15:24,706 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,45445,1689531321197 2023-07-16 18:15:24,709 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.hbase-snapshot/.tmp 2023-07-16 18:15:24,749 INFO [RS:1;jenkins-hbase4:43375] regionserver.HRegionServer(951): ClusterId : 750c298c-5826-454f-bacc-8ea8fc4f680a 2023-07-16 18:15:24,749 INFO [RS:2;jenkins-hbase4:41927] regionserver.HRegionServer(951): ClusterId : 750c298c-5826-454f-bacc-8ea8fc4f680a 2023-07-16 18:15:24,749 INFO [RS:0;jenkins-hbase4:33809] regionserver.HRegionServer(951): ClusterId : 750c298c-5826-454f-bacc-8ea8fc4f680a 2023-07-16 18:15:24,757 DEBUG [RS:0;jenkins-hbase4:33809] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 18:15:24,757 DEBUG [RS:2;jenkins-hbase4:41927] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 18:15:24,757 DEBUG [RS:1;jenkins-hbase4:43375] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 18:15:24,787 DEBUG [RS:0;jenkins-hbase4:33809] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 18:15:24,787 DEBUG [RS:1;jenkins-hbase4:43375] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 18:15:24,787 DEBUG [RS:2;jenkins-hbase4:41927] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 18:15:24,788 DEBUG [RS:1;jenkins-hbase4:43375] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 18:15:24,788 DEBUG [RS:0;jenkins-hbase4:33809] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 18:15:24,788 DEBUG [RS:2;jenkins-hbase4:41927] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 18:15:24,796 DEBUG [RS:2;jenkins-hbase4:41927] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 18:15:24,796 DEBUG [RS:0;jenkins-hbase4:33809] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 18:15:24,796 DEBUG [RS:1;jenkins-hbase4:43375] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 18:15:24,798 DEBUG [RS:0;jenkins-hbase4:33809] zookeeper.ReadOnlyZKClient(139): Connect 0x7edfaaa7 to 127.0.0.1:53498 with session timeout=90000ms, retries 30, retry interval 1000ms, 
keepAlive=60000ms 2023-07-16 18:15:24,798 DEBUG [RS:2;jenkins-hbase4:41927] zookeeper.ReadOnlyZKClient(139): Connect 0x06110cb7 to 127.0.0.1:53498 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 18:15:24,799 DEBUG [RS:1;jenkins-hbase4:43375] zookeeper.ReadOnlyZKClient(139): Connect 0x40207005 to 127.0.0.1:53498 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 18:15:24,808 DEBUG [RS:2;jenkins-hbase4:41927] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@8d6e205, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 18:15:24,809 DEBUG [RS:1;jenkins-hbase4:43375] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@522e665, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 18:15:24,809 DEBUG [RS:0;jenkins-hbase4:33809] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@ca035f0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 18:15:24,809 DEBUG [RS:2;jenkins-hbase4:41927] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@ce967c7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 18:15:24,809 DEBUG [RS:1;jenkins-hbase4:43375] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@fab4607, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 18:15:24,809 DEBUG [RS:0;jenkins-hbase4:33809] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@5c737d24, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 18:15:24,818 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-16 18:15:24,827 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-16 18:15:24,831 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45445,1689531321197] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 18:15:24,833 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-16 18:15:24,834 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
2023-07-16 18:15:24,834 DEBUG [RS:0;jenkins-hbase4:33809] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:33809 2023-07-16 18:15:24,837 DEBUG [RS:2;jenkins-hbase4:41927] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:41927 2023-07-16 18:15:24,837 DEBUG [RS:1;jenkins-hbase4:43375] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:43375 2023-07-16 18:15:24,840 INFO [RS:0;jenkins-hbase4:33809] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 18:15:24,840 INFO [RS:2;jenkins-hbase4:41927] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 18:15:24,841 INFO [RS:2;jenkins-hbase4:41927] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 18:15:24,840 INFO [RS:1;jenkins-hbase4:43375] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 18:15:24,842 DEBUG [RS:2;jenkins-hbase4:41927] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 18:15:24,841 INFO [RS:0;jenkins-hbase4:33809] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 18:15:24,842 INFO [RS:1;jenkins-hbase4:43375] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 18:15:24,842 DEBUG [RS:0;jenkins-hbase4:33809] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 18:15:24,842 DEBUG [RS:1;jenkins-hbase4:43375] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 18:15:24,845 INFO [RS:2;jenkins-hbase4:41927] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45445,1689531321197 with isa=jenkins-hbase4.apache.org/172.31.14.131:41927, startcode=1689531323590 2023-07-16 18:15:24,845 INFO [RS:0;jenkins-hbase4:33809] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45445,1689531321197 with isa=jenkins-hbase4.apache.org/172.31.14.131:33809, startcode=1689531323219 2023-07-16 18:15:24,845 INFO [RS:1;jenkins-hbase4:43375] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45445,1689531321197 with isa=jenkins-hbase4.apache.org/172.31.14.131:43375, startcode=1689531323422 2023-07-16 18:15:24,865 DEBUG [RS:1;jenkins-hbase4:43375] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 18:15:24,865 DEBUG [RS:0;jenkins-hbase4:33809] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 18:15:24,865 DEBUG [RS:2;jenkins-hbase4:41927] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 18:15:24,938 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-16 18:15:24,940 INFO [RS-EventLoopGroup-1-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59619, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.1 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 18:15:24,940 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:60325, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.0 (auth:SIMPLE), service=RegionServerStatusService 
2023-07-16 18:15:24,940 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:34513, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.2 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 18:15:24,951 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type
org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
    at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832)
    at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-16 18:15:24,962 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type
org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
    at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832)
    at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-16 18:15:24,964 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type
org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
    at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2832)
    at org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:579)
    at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:15952)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-16 18:15:24,986 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-16 18:15:24,991 DEBUG [RS:0;jenkins-hbase4:33809] regionserver.HRegionServer(2830): Master is not running yet 2023-07-16 18:15:24,991 DEBUG [RS:2;jenkins-hbase4:41927] regionserver.HRegionServer(2830): Master is not running yet 2023-07-16 18:15:24,991 WARN [RS:0;jenkins-hbase4:33809] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying.
2023-07-16 18:15:24,991 DEBUG [RS:1;jenkins-hbase4:43375] regionserver.HRegionServer(2830): Master is not running yet 2023-07-16 18:15:24,991 WARN [RS:2;jenkins-hbase4:41927] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-16 18:15:24,992 WARN [RS:1;jenkins-hbase4:43375] regionserver.HRegionServer(1030): reportForDuty failed; sleeping 100 ms and then retrying. 2023-07-16 18:15:24,992 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-16 18:15:24,992 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-16 18:15:24,992 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-16 18:15:24,994 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 18:15:24,994 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 18:15:24,994 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 18:15:24,994 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 18:15:24,994 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-16 18:15:24,994 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:24,994 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 18:15:24,994 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:24,996 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689531354995 2023-07-16 18:15:24,998 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-16 18:15:25,002 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-16 18:15:25,003 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-16 18:15:25,005 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-16 18:15:25,008 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-16 18:15:25,011 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-16 18:15:25,011 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-16 18:15:25,012 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-16 18:15:25,012 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-16 18:15:25,015 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-16 18:15:25,017 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-16 18:15:25,019 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-16 18:15:25,020 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-16 18:15:25,024 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-16 18:15:25,024 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-16 18:15:25,030 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689531325026,5,FailOnTimeoutGroup] 2023-07-16 18:15:25,031 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689531325031,5,FailOnTimeoutGroup] 2023-07-16 18:15:25,031 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:25,031 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-16 18:15:25,033 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:25,033 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-16 18:15:25,087 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-16 18:15:25,088 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-16 18:15:25,089 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92 2023-07-16 18:15:25,092 INFO [RS:0;jenkins-hbase4:33809] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45445,1689531321197 with isa=jenkins-hbase4.apache.org/172.31.14.131:33809, startcode=1689531323219 2023-07-16 18:15:25,093 INFO [RS:2;jenkins-hbase4:41927] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45445,1689531321197 with isa=jenkins-hbase4.apache.org/172.31.14.131:41927, startcode=1689531323590 2023-07-16 18:15:25,093 INFO [RS:1;jenkins-hbase4:43375] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45445,1689531321197 with isa=jenkins-hbase4.apache.org/172.31.14.131:43375, startcode=1689531323422 2023-07-16 18:15:25,099 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45445] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:25,100 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45445,1689531321197] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-16 18:15:25,102 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45445,1689531321197] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-16 18:15:25,110 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45445] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:25,110 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45445,1689531321197] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 18:15:25,111 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45445,1689531321197] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-16 18:15:25,115 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45445] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:25,115 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45445,1689531321197] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 18:15:25,115 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45445,1689531321197] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-16 18:15:25,116 DEBUG [RS:0;jenkins-hbase4:33809] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92 2023-07-16 18:15:25,116 DEBUG [RS:1;jenkins-hbase4:43375] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92 2023-07-16 18:15:25,116 DEBUG [RS:0;jenkins-hbase4:33809] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36523 2023-07-16 18:15:25,116 DEBUG [RS:1;jenkins-hbase4:43375] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36523 2023-07-16 18:15:25,117 DEBUG [RS:1;jenkins-hbase4:43375] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34821 2023-07-16 18:15:25,117 DEBUG [RS:0;jenkins-hbase4:33809] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34821 2023-07-16 18:15:25,117 DEBUG [RS:2;jenkins-hbase4:41927] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92 2023-07-16 18:15:25,118 DEBUG [RS:2;jenkins-hbase4:41927] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36523 2023-07-16 18:15:25,118 DEBUG [RS:2;jenkins-hbase4:41927] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34821 2023-07-16 18:15:25,122 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:25,126 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, 
cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 18:15:25,129 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:15:25,131 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/info 2023-07-16 18:15:25,132 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 18:15:25,132 DEBUG [RS:1;jenkins-hbase4:43375] zookeeper.ZKUtil(162): regionserver:43375-0x1016f588ace0002, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:25,132 DEBUG [RS:2;jenkins-hbase4:41927] zookeeper.ZKUtil(162): regionserver:41927-0x1016f588ace0003, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:25,132 DEBUG [RS:0;jenkins-hbase4:33809] zookeeper.ZKUtil(162): regionserver:33809-0x1016f588ace0001, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:25,135 WARN [RS:2;jenkins-hbase4:41927] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 18:15:25,132 WARN [RS:1;jenkins-hbase4:43375] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 18:15:25,136 INFO [RS:2;jenkins-hbase4:41927] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 18:15:25,136 WARN [RS:0;jenkins-hbase4:33809] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-16 18:15:25,136 INFO [RS:1;jenkins-hbase4:43375] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 18:15:25,137 INFO [RS:0;jenkins-hbase4:33809] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 18:15:25,137 DEBUG [RS:2;jenkins-hbase4:41927] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/WALs/jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:25,137 DEBUG [RS:0;jenkins-hbase4:33809] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/WALs/jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:25,137 DEBUG [RS:1;jenkins-hbase4:43375] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/WALs/jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:25,137 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:25,138 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,33809,1689531323219] 2023-07-16 18:15:25,138 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 18:15:25,138 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43375,1689531323422] 2023-07-16 18:15:25,138 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,41927,1689531323590] 2023-07-16 18:15:25,148 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/rep_barrier 2023-07-16 18:15:25,149 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 18:15:25,150 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:25,151 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, 
cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 18:15:25,153 DEBUG [RS:0;jenkins-hbase4:33809] zookeeper.ZKUtil(162): regionserver:33809-0x1016f588ace0001, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:25,154 DEBUG [RS:2;jenkins-hbase4:41927] zookeeper.ZKUtil(162): regionserver:41927-0x1016f588ace0003, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:25,154 DEBUG [RS:1;jenkins-hbase4:43375] zookeeper.ZKUtil(162): regionserver:43375-0x1016f588ace0002, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:25,154 DEBUG [RS:0;jenkins-hbase4:33809] zookeeper.ZKUtil(162): regionserver:33809-0x1016f588ace0001, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:25,154 DEBUG [RS:2;jenkins-hbase4:41927] zookeeper.ZKUtil(162): regionserver:41927-0x1016f588ace0003, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:25,155 DEBUG [RS:1;jenkins-hbase4:43375] zookeeper.ZKUtil(162): regionserver:43375-0x1016f588ace0002, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:25,155 DEBUG [RS:0;jenkins-hbase4:33809] zookeeper.ZKUtil(162): regionserver:33809-0x1016f588ace0001, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:25,155 DEBUG [RS:2;jenkins-hbase4:41927] zookeeper.ZKUtil(162): regionserver:41927-0x1016f588ace0003, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:25,155 DEBUG [RS:1;jenkins-hbase4:43375] zookeeper.ZKUtil(162): regionserver:43375-0x1016f588ace0002, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:25,157 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/table 2023-07-16 18:15:25,157 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 18:15:25,159 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, 
encoding=NONE, compression=NONE 2023-07-16 18:15:25,160 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740 2023-07-16 18:15:25,161 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740 2023-07-16 18:15:25,170 DEBUG [RS:0;jenkins-hbase4:33809] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 18:15:25,171 DEBUG [RS:1;jenkins-hbase4:43375] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 18:15:25,170 DEBUG [RS:2;jenkins-hbase4:41927] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 18:15:25,171 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-16 18:15:25,177 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 18:15:25,184 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:25,186 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11532700800, jitterRate=0.07406646013259888}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 18:15:25,186 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 18:15:25,186 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 18:15:25,186 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 18:15:25,186 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 18:15:25,186 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 18:15:25,186 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 18:15:25,187 INFO [RS:2;jenkins-hbase4:41927] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 18:15:25,187 INFO [RS:0;jenkins-hbase4:33809] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 18:15:25,187 INFO [RS:1;jenkins-hbase4:43375] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 18:15:25,193 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 18:15:25,193 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-16 18:15:25,201 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-16 18:15:25,202 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-16 18:15:25,217 INFO 
[RS:0;jenkins-hbase4:33809] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 18:15:25,219 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-16 18:15:25,220 INFO [RS:2;jenkins-hbase4:41927] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 18:15:25,220 INFO [RS:1;jenkins-hbase4:43375] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 18:15:25,234 INFO [RS:2;jenkins-hbase4:41927] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 18:15:25,234 INFO [RS:1;jenkins-hbase4:43375] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 18:15:25,234 INFO [RS:2;jenkins-hbase4:41927] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:25,234 INFO [RS:0;jenkins-hbase4:33809] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 18:15:25,245 INFO [RS:1;jenkins-hbase4:43375] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:25,245 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-16 18:15:25,245 INFO [RS:0;jenkins-hbase4:33809] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:25,248 INFO [RS:0;jenkins-hbase4:33809] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 18:15:25,248 INFO [RS:1;jenkins-hbase4:43375] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 18:15:25,248 INFO [RS:2;jenkins-hbase4:41927] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 18:15:25,251 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-16 18:15:25,261 INFO [RS:1;jenkins-hbase4:43375] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:25,261 INFO [RS:0;jenkins-hbase4:33809] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
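For context on the compaction throughput figures above (higher bound 100.00 MB/second, lower bound 50.00 MB/second are the defaults reported by PressureAwareCompactionThroughputController), here is a minimal sketch of how a test could override them before starting a cluster. The property keys are my assumption about what the throughput controller reads; they are not taken from this log.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionThroughputOverride {
  public static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    // Assumed keys for PressureAwareCompactionThroughputController; values are bytes per second.
    conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 200L * 1024 * 1024);
    conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 100L * 1024 * 1024);
    return conf;
  }
}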
2023-07-16 18:15:25,261 INFO [RS:2;jenkins-hbase4:41927] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:25,261 DEBUG [RS:1;jenkins-hbase4:43375] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:25,262 DEBUG [RS:0;jenkins-hbase4:33809] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:25,262 DEBUG [RS:1;jenkins-hbase4:43375] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:25,262 DEBUG [RS:0;jenkins-hbase4:33809] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:25,262 DEBUG [RS:2;jenkins-hbase4:41927] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:25,262 DEBUG [RS:0;jenkins-hbase4:33809] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:25,262 DEBUG [RS:2;jenkins-hbase4:41927] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:25,262 DEBUG [RS:0;jenkins-hbase4:33809] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:25,262 DEBUG [RS:2;jenkins-hbase4:41927] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:25,262 DEBUG [RS:0;jenkins-hbase4:33809] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:25,262 DEBUG [RS:2;jenkins-hbase4:41927] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:25,262 DEBUG [RS:0;jenkins-hbase4:33809] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 18:15:25,262 DEBUG [RS:2;jenkins-hbase4:41927] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:25,262 DEBUG [RS:0;jenkins-hbase4:33809] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:25,263 DEBUG [RS:2;jenkins-hbase4:41927] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 18:15:25,262 DEBUG [RS:1;jenkins-hbase4:43375] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:25,263 DEBUG [RS:2;jenkins-hbase4:41927] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 
2023-07-16 18:15:25,263 DEBUG [RS:1;jenkins-hbase4:43375] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:25,263 DEBUG [RS:2;jenkins-hbase4:41927] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:25,263 DEBUG [RS:1;jenkins-hbase4:43375] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:25,263 DEBUG [RS:2;jenkins-hbase4:41927] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:25,263 DEBUG [RS:1;jenkins-hbase4:43375] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 18:15:25,263 DEBUG [RS:2;jenkins-hbase4:41927] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:25,263 DEBUG [RS:1;jenkins-hbase4:43375] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:25,263 DEBUG [RS:0;jenkins-hbase4:33809] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:25,263 DEBUG [RS:1;jenkins-hbase4:43375] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:25,263 DEBUG [RS:0;jenkins-hbase4:33809] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:25,264 DEBUG [RS:1;jenkins-hbase4:43375] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:25,264 DEBUG [RS:0;jenkins-hbase4:33809] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:25,264 DEBUG [RS:1;jenkins-hbase4:43375] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:25,265 INFO [RS:2;jenkins-hbase4:41927] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:25,265 INFO [RS:2;jenkins-hbase4:41927] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:25,265 INFO [RS:2;jenkins-hbase4:41927] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:25,265 INFO [RS:0;jenkins-hbase4:33809] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:25,265 INFO [RS:0;jenkins-hbase4:33809] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-07-16 18:15:25,266 INFO [RS:0;jenkins-hbase4:33809] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:25,266 INFO [RS:1;jenkins-hbase4:43375] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:25,266 INFO [RS:1;jenkins-hbase4:43375] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:25,266 INFO [RS:1;jenkins-hbase4:43375] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:25,282 INFO [RS:0;jenkins-hbase4:33809] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 18:15:25,282 INFO [RS:1;jenkins-hbase4:43375] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 18:15:25,282 INFO [RS:2;jenkins-hbase4:41927] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 18:15:25,287 INFO [RS:0;jenkins-hbase4:33809] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,33809,1689531323219-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:25,287 INFO [RS:2;jenkins-hbase4:41927] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,41927,1689531323590-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:25,288 INFO [RS:1;jenkins-hbase4:43375] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43375,1689531323422-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:25,318 INFO [RS:2;jenkins-hbase4:41927] regionserver.Replication(203): jenkins-hbase4.apache.org,41927,1689531323590 started 2023-07-16 18:15:25,318 INFO [RS:2;jenkins-hbase4:41927] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,41927,1689531323590, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:41927, sessionid=0x1016f588ace0003 2023-07-16 18:15:25,324 INFO [RS:0;jenkins-hbase4:33809] regionserver.Replication(203): jenkins-hbase4.apache.org,33809,1689531323219 started 2023-07-16 18:15:25,324 INFO [RS:0;jenkins-hbase4:33809] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,33809,1689531323219, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:33809, sessionid=0x1016f588ace0001 2023-07-16 18:15:25,324 DEBUG [RS:0;jenkins-hbase4:33809] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 18:15:25,324 DEBUG [RS:0;jenkins-hbase4:33809] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:25,324 DEBUG [RS:0;jenkins-hbase4:33809] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33809,1689531323219' 2023-07-16 18:15:25,324 DEBUG [RS:0;jenkins-hbase4:33809] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 18:15:25,325 DEBUG [RS:2;jenkins-hbase4:41927] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 18:15:25,325 DEBUG [RS:2;jenkins-hbase4:41927] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:25,327 INFO [RS:1;jenkins-hbase4:43375] regionserver.Replication(203): 
jenkins-hbase4.apache.org,43375,1689531323422 started 2023-07-16 18:15:25,327 DEBUG [RS:2;jenkins-hbase4:41927] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41927,1689531323590' 2023-07-16 18:15:25,334 DEBUG [RS:2;jenkins-hbase4:41927] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 18:15:25,334 INFO [RS:1;jenkins-hbase4:43375] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43375,1689531323422, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43375, sessionid=0x1016f588ace0002 2023-07-16 18:15:25,334 DEBUG [RS:0;jenkins-hbase4:33809] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 18:15:25,334 DEBUG [RS:1;jenkins-hbase4:43375] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 18:15:25,334 DEBUG [RS:1;jenkins-hbase4:43375] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:25,335 DEBUG [RS:1;jenkins-hbase4:43375] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43375,1689531323422' 2023-07-16 18:15:25,335 DEBUG [RS:1;jenkins-hbase4:43375] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 18:15:25,335 DEBUG [RS:2;jenkins-hbase4:41927] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 18:15:25,335 DEBUG [RS:2;jenkins-hbase4:41927] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 18:15:25,335 DEBUG [RS:0;jenkins-hbase4:33809] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 18:15:25,336 DEBUG [RS:0;jenkins-hbase4:33809] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 18:15:25,336 DEBUG [RS:1;jenkins-hbase4:43375] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 18:15:25,336 DEBUG [RS:2;jenkins-hbase4:41927] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 18:15:25,336 DEBUG [RS:2;jenkins-hbase4:41927] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:25,336 DEBUG [RS:2;jenkins-hbase4:41927] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,41927,1689531323590' 2023-07-16 18:15:25,336 DEBUG [RS:2;jenkins-hbase4:41927] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 18:15:25,336 DEBUG [RS:0;jenkins-hbase4:33809] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:25,336 DEBUG [RS:0;jenkins-hbase4:33809] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,33809,1689531323219' 2023-07-16 18:15:25,336 DEBUG [RS:0;jenkins-hbase4:33809] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 18:15:25,337 DEBUG [RS:2;jenkins-hbase4:41927] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under 
znode:'/hbase/online-snapshot/acquired' 2023-07-16 18:15:25,337 DEBUG [RS:0;jenkins-hbase4:33809] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 18:15:25,337 DEBUG [RS:1;jenkins-hbase4:43375] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 18:15:25,337 DEBUG [RS:2;jenkins-hbase4:41927] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 18:15:25,337 DEBUG [RS:1;jenkins-hbase4:43375] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 18:15:25,337 INFO [RS:2;jenkins-hbase4:41927] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 18:15:25,337 DEBUG [RS:1;jenkins-hbase4:43375] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:25,338 INFO [RS:2;jenkins-hbase4:41927] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-16 18:15:25,338 DEBUG [RS:1;jenkins-hbase4:43375] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43375,1689531323422' 2023-07-16 18:15:25,338 DEBUG [RS:1;jenkins-hbase4:43375] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 18:15:25,338 DEBUG [RS:0;jenkins-hbase4:33809] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 18:15:25,338 DEBUG [RS:1;jenkins-hbase4:43375] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 18:15:25,339 INFO [RS:0;jenkins-hbase4:33809] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 18:15:25,339 INFO [RS:0;jenkins-hbase4:33809] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 2023-07-16 18:15:25,339 DEBUG [RS:1;jenkins-hbase4:43375] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 18:15:25,339 INFO [RS:1;jenkins-hbase4:43375] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 18:15:25,339 INFO [RS:1;jenkins-hbase4:43375] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
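The "Quota support disabled" lines above reflect the default configuration. A minimal sketch, assuming the standard hbase.quota.enabled key, of how a test could turn quota support on before the region servers start; the class name is hypothetical and the snippet is illustrative rather than part of this test run.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class EnableQuotas {
  public static Configuration build() {
    Configuration conf = HBaseConfiguration.create();
    // Left at its default (false), RegionServerRpcQuotaManager and
    // RegionServerSpaceQuotaManager log "Quota support disabled" as seen above.
    conf.setBoolean("hbase.quota.enabled", true);
    return conf;
  }
}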
2023-07-16 18:15:25,403 DEBUG [jenkins-hbase4:45445] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-16 18:15:25,421 DEBUG [jenkins-hbase4:45445] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 18:15:25,422 DEBUG [jenkins-hbase4:45445] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 18:15:25,422 DEBUG [jenkins-hbase4:45445] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 18:15:25,422 DEBUG [jenkins-hbase4:45445] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 18:15:25,422 DEBUG [jenkins-hbase4:45445] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 18:15:25,427 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33809,1689531323219, state=OPENING 2023-07-16 18:15:25,436 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-16 18:15:25,439 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:25,440 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 18:15:25,444 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33809,1689531323219}] 2023-07-16 18:15:25,460 INFO [RS:0;jenkins-hbase4:33809] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33809%2C1689531323219, suffix=, logDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/WALs/jenkins-hbase4.apache.org,33809,1689531323219, archiveDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/oldWALs, maxLogs=32 2023-07-16 18:15:25,461 INFO [RS:2;jenkins-hbase4:41927] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C41927%2C1689531323590, suffix=, logDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/WALs/jenkins-hbase4.apache.org,41927,1689531323590, archiveDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/oldWALs, maxLogs=32 2023-07-16 18:15:25,467 INFO [RS:1;jenkins-hbase4:43375] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43375%2C1689531323422, suffix=, logDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/WALs/jenkins-hbase4.apache.org,43375,1689531323422, archiveDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/oldWALs, maxLogs=32 2023-07-16 18:15:25,500 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42677,DS-26280529-f724-4ea2-95a9-ef1b4940f60b,DISK] 2023-07-16 18:15:25,521 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL 
client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42677,DS-26280529-f724-4ea2-95a9-ef1b4940f60b,DISK] 2023-07-16 18:15:25,528 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36769,DS-6ab893e3-d584-4af5-92b1-09319086e884,DISK] 2023-07-16 18:15:25,531 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37737,DS-918fb536-8f98-483c-9977-d31c79d2a9f3,DISK] 2023-07-16 18:15:25,531 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42677,DS-26280529-f724-4ea2-95a9-ef1b4940f60b,DISK] 2023-07-16 18:15:25,531 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37737,DS-918fb536-8f98-483c-9977-d31c79d2a9f3,DISK] 2023-07-16 18:15:25,531 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37737,DS-918fb536-8f98-483c-9977-d31c79d2a9f3,DISK] 2023-07-16 18:15:25,532 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36769,DS-6ab893e3-d584-4af5-92b1-09319086e884,DISK] 2023-07-16 18:15:25,532 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36769,DS-6ab893e3-d584-4af5-92b1-09319086e884,DISK] 2023-07-16 18:15:25,541 INFO [RS:2;jenkins-hbase4:41927] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/WALs/jenkins-hbase4.apache.org,41927,1689531323590/jenkins-hbase4.apache.org%2C41927%2C1689531323590.1689531325466 2023-07-16 18:15:25,541 INFO [RS:0;jenkins-hbase4:33809] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/WALs/jenkins-hbase4.apache.org,33809,1689531323219/jenkins-hbase4.apache.org%2C33809%2C1689531323219.1689531325466 2023-07-16 18:15:25,541 DEBUG [RS:2;jenkins-hbase4:41927] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42677,DS-26280529-f724-4ea2-95a9-ef1b4940f60b,DISK], DatanodeInfoWithStorage[127.0.0.1:37737,DS-918fb536-8f98-483c-9977-d31c79d2a9f3,DISK], DatanodeInfoWithStorage[127.0.0.1:36769,DS-6ab893e3-d584-4af5-92b1-09319086e884,DISK]] 2023-07-16 18:15:25,541 DEBUG [RS:0;jenkins-hbase4:33809] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36769,DS-6ab893e3-d584-4af5-92b1-09319086e884,DISK], DatanodeInfoWithStorage[127.0.0.1:37737,DS-918fb536-8f98-483c-9977-d31c79d2a9f3,DISK], 
DatanodeInfoWithStorage[127.0.0.1:42677,DS-26280529-f724-4ea2-95a9-ef1b4940f60b,DISK]] 2023-07-16 18:15:25,543 INFO [RS:1;jenkins-hbase4:43375] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/WALs/jenkins-hbase4.apache.org,43375,1689531323422/jenkins-hbase4.apache.org%2C43375%2C1689531323422.1689531325470 2023-07-16 18:15:25,543 DEBUG [RS:1;jenkins-hbase4:43375] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42677,DS-26280529-f724-4ea2-95a9-ef1b4940f60b,DISK], DatanodeInfoWithStorage[127.0.0.1:37737,DS-918fb536-8f98-483c-9977-d31c79d2a9f3,DISK], DatanodeInfoWithStorage[127.0.0.1:36769,DS-6ab893e3-d584-4af5-92b1-09319086e884,DISK]] 2023-07-16 18:15:25,599 WARN [ReadOnlyZKClient-127.0.0.1:53498@0x4c0e5421] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-16 18:15:25,633 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45445,1689531321197] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 18:15:25,639 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51622, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 18:15:25,640 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33809] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:51622 deadline: 1689531385639, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:25,650 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:25,654 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 18:15:25,659 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51634, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 18:15:25,674 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-16 18:15:25,675 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 18:15:25,678 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C33809%2C1689531323219.meta, suffix=.meta, logDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/WALs/jenkins-hbase4.apache.org,33809,1689531323219, archiveDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/oldWALs, maxLogs=32 2023-07-16 18:15:25,695 DEBUG [RS-EventLoopGroup-5-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36769,DS-6ab893e3-d584-4af5-92b1-09319086e884,DISK] 2023-07-16 18:15:25,695 DEBUG [RS-EventLoopGroup-5-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = 
DatanodeInfoWithStorage[127.0.0.1:42677,DS-26280529-f724-4ea2-95a9-ef1b4940f60b,DISK] 2023-07-16 18:15:25,696 DEBUG [RS-EventLoopGroup-5-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37737,DS-918fb536-8f98-483c-9977-d31c79d2a9f3,DISK] 2023-07-16 18:15:25,701 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/WALs/jenkins-hbase4.apache.org,33809,1689531323219/jenkins-hbase4.apache.org%2C33809%2C1689531323219.meta.1689531325679.meta 2023-07-16 18:15:25,702 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:36769,DS-6ab893e3-d584-4af5-92b1-09319086e884,DISK], DatanodeInfoWithStorage[127.0.0.1:42677,DS-26280529-f724-4ea2-95a9-ef1b4940f60b,DISK], DatanodeInfoWithStorage[127.0.0.1:37737,DS-918fb536-8f98-483c-9977-d31c79d2a9f3,DISK]] 2023-07-16 18:15:25,702 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:25,704 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 18:15:25,707 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-16 18:15:25,709 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
2023-07-16 18:15:25,715 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-16 18:15:25,715 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:25,715 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-16 18:15:25,715 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-16 18:15:25,718 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 18:15:25,719 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/info 2023-07-16 18:15:25,719 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/info 2023-07-16 18:15:25,720 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 18:15:25,721 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:25,721 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 18:15:25,722 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/rep_barrier 2023-07-16 18:15:25,722 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/rep_barrier 2023-07-16 18:15:25,723 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 18:15:25,724 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:25,724 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 18:15:25,726 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/table 2023-07-16 18:15:25,726 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/table 2023-07-16 18:15:25,726 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 18:15:25,727 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:25,731 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740 2023-07-16 18:15:25,738 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740 2023-07-16 18:15:25,742 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
2023-07-16 18:15:25,746 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 18:15:25,748 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10514231360, jitterRate=-0.020785897970199585}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 18:15:25,748 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 18:15:25,761 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689531325645 2023-07-16 18:15:25,788 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-16 18:15:25,789 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,33809,1689531323219, state=OPEN 2023-07-16 18:15:25,789 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-16 18:15:25,792 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-16 18:15:25,793 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 18:15:25,799 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-16 18:15:25,799 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33809,1689531323219 in 348 msec 2023-07-16 18:15:25,806 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-16 18:15:25,806 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 582 msec 2023-07-16 18:15:25,813 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 967 msec 2023-07-16 18:15:25,813 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689531325813, completionTime=-1 2023-07-16 18:15:25,813 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-16 18:15:25,813 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
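Once the assignment above completes and /hbase/meta-region-server points at the server hosting hbase:meta, a client can resolve that location through the normal API. A minimal sketch, assuming the HBase 2.x client on the classpath and an hbase-site.xml pointing at this cluster; the class name is hypothetical.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class MetaLocationProbe {
  public static void main(String[] args) throws Exception {
    // Expects hbase-site.xml (ZooKeeper quorum, etc.) on the classpath.
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
      // Returns the region server currently hosting hbase:meta,
      // e.g. jenkins-hbase4.apache.org,33809,... in the log above.
      HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW);
      System.out.println("hbase:meta is on " + loc.getServerName());
    }
  }
}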
2023-07-16 18:15:25,885 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-16 18:15:25,885 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689531385885 2023-07-16 18:15:25,885 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689531445885 2023-07-16 18:15:25,885 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 71 msec 2023-07-16 18:15:25,909 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45445,1689531321197-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:25,909 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45445,1689531321197-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:25,909 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45445,1689531321197-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:25,912 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:45445, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:25,913 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:25,922 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-16 18:15:25,935 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-16 18:15:25,938 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-16 18:15:25,954 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-16 18:15:25,958 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 18:15:25,962 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 18:15:25,983 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/hbase/namespace/583941d24df0f42b80730ed46c98845b 2023-07-16 18:15:25,990 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/hbase/namespace/583941d24df0f42b80730ed46c98845b empty. 2023-07-16 18:15:25,991 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/hbase/namespace/583941d24df0f42b80730ed46c98845b 2023-07-16 18:15:25,991 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-16 18:15:26,061 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-16 18:15:26,065 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 583941d24df0f42b80730ed46c98845b, NAME => 'hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp 2023-07-16 18:15:26,117 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:26,118 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 583941d24df0f42b80730ed46c98845b, disabling compactions & flushes 2023-07-16 18:15:26,118 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b. 
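The hbase:namespace descriptor logged above (single 'info' family with BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', BLOCKSIZE => '8192') maps onto the HBase 2.x client builders as follows. This is a minimal sketch with a hypothetical table name, since system tables such as hbase:namespace are created by the master itself rather than by client code.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public final class NamespaceLikeTable {
  public static void create(Connection conn) throws Exception {
    ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("info"))
        .setBloomFilterType(BloomType.ROW)  // BLOOMFILTER => 'ROW'
        .setInMemory(true)                  // IN_MEMORY => 'true'
        .setMaxVersions(10)                 // VERSIONS => '10'
        .setBlocksize(8192)                 // BLOCKSIZE => '8192'
        .build();
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("demo", "namespace_like")) // hypothetical name
        .setColumnFamily(info)
        .build();
    try (Admin admin = conn.getAdmin()) {
      admin.createTable(td);
    }
  }
}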
2023-07-16 18:15:26,118 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b. 2023-07-16 18:15:26,118 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b. after waiting 0 ms 2023-07-16 18:15:26,118 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b. 2023-07-16 18:15:26,118 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b. 2023-07-16 18:15:26,118 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 583941d24df0f42b80730ed46c98845b: 2023-07-16 18:15:26,124 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 18:15:26,146 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689531326131"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531326131"}]},"ts":"1689531326131"} 2023-07-16 18:15:26,157 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45445,1689531321197] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 18:15:26,160 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45445,1689531321197] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-16 18:15:26,164 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 18:15:26,166 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 18:15:26,173 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/hbase/rsgroup/dc4034c470728512f24450a6af763b38 2023-07-16 18:15:26,174 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/hbase/rsgroup/dc4034c470728512f24450a6af763b38 empty. 
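The 'hbase:rsgroup' descriptor created above additionally carries the MultiRowMutationEndpoint coprocessor and the DisabledRegionSplitPolicy. A hedged sketch of declaring those two table attributes with TableDescriptorBuilder; the table name is hypothetical, while the coprocessor and split-policy class names are the ones shown in the log:

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class RsGroupLikeDescriptor {
  public static void main(String[] args) throws Exception {
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("rsgroup_like_demo")) // hypothetical table
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("m"))
        // Same coprocessor the log shows on hbase:rsgroup.
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        // Keep the table in a single region, as DisabledRegionSplitPolicy does above.
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
        .build();
    System.out.println(td);
  }
}
```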
2023-07-16 18:15:26,176 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/hbase/rsgroup/dc4034c470728512f24450a6af763b38 2023-07-16 18:15:26,176 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-16 18:15:26,186 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 18:15:26,189 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 18:15:26,205 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531326189"}]},"ts":"1689531326189"} 2023-07-16 18:15:26,214 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-16 18:15:26,218 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-16 18:15:26,224 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 18:15:26,224 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => dc4034c470728512f24450a6af763b38, NAME => 'hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp 2023-07-16 18:15:26,224 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 18:15:26,224 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 18:15:26,224 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 18:15:26,224 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 18:15:26,226 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=583941d24df0f42b80730ed46c98845b, ASSIGN}] 2023-07-16 18:15:26,243 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=583941d24df0f42b80730ed46c98845b, ASSIGN 2023-07-16 18:15:26,247 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, 
region=583941d24df0f42b80730ed46c98845b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43375,1689531323422; forceNewPlan=false, retain=false 2023-07-16 18:15:26,269 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:26,269 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing dc4034c470728512f24450a6af763b38, disabling compactions & flushes 2023-07-16 18:15:26,269 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38. 2023-07-16 18:15:26,269 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38. 2023-07-16 18:15:26,269 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38. after waiting 0 ms 2023-07-16 18:15:26,270 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38. 2023-07-16 18:15:26,270 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38. 2023-07-16 18:15:26,270 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for dc4034c470728512f24450a6af763b38: 2023-07-16 18:15:26,276 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 18:15:26,277 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689531326277"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531326277"}]},"ts":"1689531326277"} 2023-07-16 18:15:26,281 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
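The Put entries above are the master writing region state into the info family of hbase:meta. As an illustration (not part of the test itself), a client can read those same cells back with an ordinary scan of the catalog table, using the standard HConstants qualifiers:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class ScanMetaSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table meta = conn.getTable(TableName.META_TABLE_NAME);
         ResultScanner scanner =
             meta.getScanner(new Scan().addFamily(HConstants.CATALOG_FAMILY))) {
      for (Result r : scanner) {
        // info:regioninfo holds the serialized RegionInfo written by the Puts above.
        byte[] value = r.getValue(HConstants.CATALOG_FAMILY, HConstants.REGIONINFO_QUALIFIER);
        RegionInfo ri = RegionInfo.parseFromOrNull(value);
        if (ri != null) {
          System.out.println(ri.getRegionNameAsString());
        }
      }
    }
  }
}
```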
2023-07-16 18:15:26,283 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 18:15:26,283 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531326283"}]},"ts":"1689531326283"} 2023-07-16 18:15:26,289 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-16 18:15:26,293 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 18:15:26,294 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 18:15:26,294 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 18:15:26,294 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 18:15:26,294 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 18:15:26,294 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=dc4034c470728512f24450a6af763b38, ASSIGN}] 2023-07-16 18:15:26,301 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=dc4034c470728512f24450a6af763b38, ASSIGN 2023-07-16 18:15:26,303 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=dc4034c470728512f24450a6af763b38, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41927,1689531323590; forceNewPlan=false, retain=false 2023-07-16 18:15:26,304 INFO [jenkins-hbase4:45445] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-16 18:15:26,306 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=583941d24df0f42b80730ed46c98845b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:26,306 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=dc4034c470728512f24450a6af763b38, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:26,307 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689531326306"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531326306"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531326306"}]},"ts":"1689531326306"} 2023-07-16 18:15:26,307 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689531326306"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531326306"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531326306"}]},"ts":"1689531326306"} 2023-07-16 18:15:26,311 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure 583941d24df0f42b80730ed46c98845b, server=jenkins-hbase4.apache.org,43375,1689531323422}] 2023-07-16 18:15:26,314 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure dc4034c470728512f24450a6af763b38, server=jenkins-hbase4.apache.org,41927,1689531323590}] 2023-07-16 18:15:26,466 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:26,467 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 18:15:26,468 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:26,468 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 18:15:26,472 INFO [RS-EventLoopGroup-4-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51410, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 18:15:26,472 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48550, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 18:15:26,481 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38. 2023-07-16 18:15:26,482 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dc4034c470728512f24450a6af763b38, NAME => 'hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:26,482 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b. 
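Once the OpenRegionProcedures above have been dispatched to the region servers, test code typically just blocks until the regions come online. A minimal sketch against HBaseTestingUtility, assuming a TEST_UTIL handle like the one this test class keeps (here created inline for illustration); the table names mirror the two system tables being created in this log:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;

public class WaitForSystemTables {
  // Hypothetical utility instance; in the real test it is created in a @BeforeClass hook.
  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

  static void waitForCatalogTables() throws Exception {
    // Blocks until every region of the table has been assigned and opened.
    TEST_UTIL.waitUntilAllRegionsAssigned(TableName.valueOf("hbase:namespace"));
    TEST_UTIL.waitUntilAllRegionsAssigned(TableName.valueOf("hbase:rsgroup"));
    // Alternatively, waitTableAvailable() also checks that the table is enabled.
    TEST_UTIL.waitTableAvailable(TableName.valueOf("hbase:rsgroup"));
  }
}
```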
2023-07-16 18:15:26,482 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 18:15:26,482 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 583941d24df0f42b80730ed46c98845b, NAME => 'hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:26,482 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38. service=MultiRowMutationService 2023-07-16 18:15:26,483 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 2023-07-16 18:15:26,484 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 583941d24df0f42b80730ed46c98845b 2023-07-16 18:15:26,484 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup dc4034c470728512f24450a6af763b38 2023-07-16 18:15:26,484 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:26,484 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:26,484 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 583941d24df0f42b80730ed46c98845b 2023-07-16 18:15:26,484 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for dc4034c470728512f24450a6af763b38 2023-07-16 18:15:26,484 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 583941d24df0f42b80730ed46c98845b 2023-07-16 18:15:26,484 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for dc4034c470728512f24450a6af763b38 2023-07-16 18:15:26,492 INFO [StoreOpener-dc4034c470728512f24450a6af763b38-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region dc4034c470728512f24450a6af763b38 2023-07-16 18:15:26,492 INFO [StoreOpener-583941d24df0f42b80730ed46c98845b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 583941d24df0f42b80730ed46c98845b 2023-07-16 18:15:26,495 
DEBUG [StoreOpener-583941d24df0f42b80730ed46c98845b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/namespace/583941d24df0f42b80730ed46c98845b/info 2023-07-16 18:15:26,495 DEBUG [StoreOpener-583941d24df0f42b80730ed46c98845b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/namespace/583941d24df0f42b80730ed46c98845b/info 2023-07-16 18:15:26,495 DEBUG [StoreOpener-dc4034c470728512f24450a6af763b38-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/rsgroup/dc4034c470728512f24450a6af763b38/m 2023-07-16 18:15:26,495 DEBUG [StoreOpener-dc4034c470728512f24450a6af763b38-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/rsgroup/dc4034c470728512f24450a6af763b38/m 2023-07-16 18:15:26,496 INFO [StoreOpener-dc4034c470728512f24450a6af763b38-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dc4034c470728512f24450a6af763b38 columnFamilyName m 2023-07-16 18:15:26,496 INFO [StoreOpener-583941d24df0f42b80730ed46c98845b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 583941d24df0f42b80730ed46c98845b columnFamilyName info 2023-07-16 18:15:26,497 INFO [StoreOpener-dc4034c470728512f24450a6af763b38-1] regionserver.HStore(310): Store=dc4034c470728512f24450a6af763b38/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:26,497 INFO [StoreOpener-583941d24df0f42b80730ed46c98845b-1] regionserver.HStore(310): Store=583941d24df0f42b80730ed46c98845b/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:26,501 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/rsgroup/dc4034c470728512f24450a6af763b38 2023-07-16 18:15:26,505 DEBUG 
[RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/namespace/583941d24df0f42b80730ed46c98845b 2023-07-16 18:15:26,506 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/namespace/583941d24df0f42b80730ed46c98845b 2023-07-16 18:15:26,506 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/rsgroup/dc4034c470728512f24450a6af763b38 2023-07-16 18:15:26,511 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 583941d24df0f42b80730ed46c98845b 2023-07-16 18:15:26,513 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for dc4034c470728512f24450a6af763b38 2023-07-16 18:15:26,516 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/namespace/583941d24df0f42b80730ed46c98845b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:26,517 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 583941d24df0f42b80730ed46c98845b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9521415520, jitterRate=-0.11324907839298248}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:26,517 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 583941d24df0f42b80730ed46c98845b: 2023-07-16 18:15:26,520 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/rsgroup/dc4034c470728512f24450a6af763b38/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:26,521 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened dc4034c470728512f24450a6af763b38; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@2b6c9b20, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:26,521 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for dc4034c470728512f24450a6af763b38: 2023-07-16 18:15:26,523 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38., pid=9, masterSystemTime=1689531326468 2023-07-16 18:15:26,528 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b., pid=8, masterSystemTime=1689531326466 2023-07-16 18:15:26,534 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=dc4034c470728512f24450a6af763b38, regionState=OPEN, openSeqNum=2, 
regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:26,534 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38. 2023-07-16 18:15:26,535 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38. 2023-07-16 18:15:26,535 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689531326533"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531326533"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531326533"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531326533"}]},"ts":"1689531326533"} 2023-07-16 18:15:26,535 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b. 2023-07-16 18:15:26,535 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b. 2023-07-16 18:15:26,537 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=583941d24df0f42b80730ed46c98845b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:26,539 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689531326537"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531326537"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531326537"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531326537"}]},"ts":"1689531326537"} 2023-07-16 18:15:26,547 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-16 18:15:26,548 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure dc4034c470728512f24450a6af763b38, server=jenkins-hbase4.apache.org,41927,1689531323590 in 228 msec 2023-07-16 18:15:26,549 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-16 18:15:26,550 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure 583941d24df0f42b80730ed46c98845b, server=jenkins-hbase4.apache.org,43375,1689531323422 in 233 msec 2023-07-16 18:15:26,565 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-16 18:15:26,565 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=dc4034c470728512f24450a6af763b38, ASSIGN in 253 msec 2023-07-16 18:15:26,565 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-16 18:15:26,566 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, 
region=583941d24df0f42b80730ed46c98845b, ASSIGN in 324 msec 2023-07-16 18:15:26,567 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 18:15:26,567 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531326567"}]},"ts":"1689531326567"} 2023-07-16 18:15:26,567 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 18:15:26,568 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531326568"}]},"ts":"1689531326568"} 2023-07-16 18:15:26,572 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-16 18:15:26,573 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-16 18:15:26,579 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 18:15:26,581 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 18:15:26,592 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 423 msec 2023-07-16 18:15:26,593 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 642 msec 2023-07-16 18:15:26,658 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-16 18:15:26,660 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-16 18:15:26,660 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:26,692 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 18:15:26,696 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45445,1689531321197] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 18:15:26,700 INFO [RS-EventLoopGroup-5-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48562, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 18:15:26,705 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45445,1689531321197] 
rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-16 18:15:26,705 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45445,1689531321197] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-16 18:15:26,708 INFO [RS-EventLoopGroup-4-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51414, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 18:15:26,743 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-16 18:15:26,763 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 18:15:26,772 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 45 msec 2023-07-16 18:15:26,776 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-16 18:15:26,790 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 18:15:26,797 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 19 msec 2023-07-16 18:15:26,811 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:26,811 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45445,1689531321197] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:26,819 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-16 18:15:26,820 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45445,1689531321197] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 18:15:26,823 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-16 18:15:26,824 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 3.028sec 2023-07-16 18:15:26,826 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-16 18:15:26,829 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
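The two CreateNamespaceProcedure runs above ('default' and 'hbase') are the built-in namespaces the master creates at startup. A hedged sketch of creating a user namespace the same way through the Admin API; the namespace name here is hypothetical:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CreateNamespaceSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Runs a CreateNamespaceProcedure on the master, like the pid=10/pid=11 entries above.
      admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
      for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
        System.out.println(ns.getName());
      }
    }
  }
}
```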
2023-07-16 18:15:26,829 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-16 18:15:26,829 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,45445,1689531321197] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-16 18:15:26,831 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45445,1689531321197-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-16 18:15:26,832 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,45445,1689531321197-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-16 18:15:26,842 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-16 18:15:26,858 DEBUG [Listener at localhost/38073] zookeeper.ReadOnlyZKClient(139): Connect 0x2c40ba5c to 127.0.0.1:53498 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 18:15:26,878 DEBUG [Listener at localhost/38073] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2ef2a8b9, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 18:15:26,905 DEBUG [hconnection-0x1a923ff9-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 18:15:26,920 INFO [RS-EventLoopGroup-3-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51650, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 18:15:26,934 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,45445,1689531321197 2023-07-16 18:15:26,936 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:26,948 DEBUG [Listener at localhost/38073] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-16 18:15:26,953 INFO [RS-EventLoopGroup-1-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45244, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-16 18:15:26,969 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-16 18:15:26,969 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:26,970 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-16 18:15:26,976 DEBUG [Listener at localhost/38073] zookeeper.ReadOnlyZKClient(139): Connect 0x724df952 to 127.0.0.1:53498 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 18:15:26,982 DEBUG [Listener at localhost/38073] 
ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@52ff7963, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 18:15:26,982 INFO [Listener at localhost/38073] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:53498 2023-07-16 18:15:26,985 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 18:15:26,986 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1016f588ace000a connected 2023-07-16 18:15:27,019 INFO [Listener at localhost/38073] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=423, OpenFileDescriptor=679, MaxFileDescriptor=60000, SystemLoadAverage=413, ProcessCount=173, AvailableMemoryMB=3771 2023-07-16 18:15:27,022 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase(132): testTableMoveTruncateAndDrop 2023-07-16 18:15:27,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:27,047 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:27,095 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-16 18:15:27,108 INFO [Listener at localhost/38073] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 18:15:27,108 INFO [Listener at localhost/38073] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:27,109 INFO [Listener at localhost/38073] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:27,109 INFO [Listener at localhost/38073] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 18:15:27,109 INFO [Listener at localhost/38073] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:27,109 INFO [Listener at localhost/38073] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 18:15:27,109 INFO [Listener at localhost/38073] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 18:15:27,114 INFO [Listener at localhost/38073] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44563 2023-07-16 18:15:27,115 INFO [Listener at localhost/38073] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 18:15:27,117 DEBUG [Listener at localhost/38073] 
mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 18:15:27,119 INFO [Listener at localhost/38073] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:27,123 INFO [Listener at localhost/38073] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:27,127 INFO [Listener at localhost/38073] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:44563 connecting to ZooKeeper ensemble=127.0.0.1:53498 2023-07-16 18:15:27,135 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:445630x0, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 18:15:27,136 DEBUG [Listener at localhost/38073] zookeeper.ZKUtil(162): regionserver:445630x0, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 18:15:27,138 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:44563-0x1016f588ace000b connected 2023-07-16 18:15:27,138 DEBUG [Listener at localhost/38073] zookeeper.ZKUtil(162): regionserver:44563-0x1016f588ace000b, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-16 18:15:27,139 DEBUG [Listener at localhost/38073] zookeeper.ZKUtil(164): regionserver:44563-0x1016f588ace000b, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 18:15:27,141 DEBUG [Listener at localhost/38073] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44563 2023-07-16 18:15:27,146 DEBUG [Listener at localhost/38073] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44563 2023-07-16 18:15:27,147 DEBUG [Listener at localhost/38073] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44563 2023-07-16 18:15:27,147 DEBUG [Listener at localhost/38073] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44563 2023-07-16 18:15:27,148 DEBUG [Listener at localhost/38073] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44563 2023-07-16 18:15:27,150 INFO [Listener at localhost/38073] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 18:15:27,151 INFO [Listener at localhost/38073] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 18:15:27,151 INFO [Listener at localhost/38073] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 18:15:27,152 INFO [Listener at localhost/38073] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 18:15:27,152 INFO [Listener at localhost/38073] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to 
context logs 2023-07-16 18:15:27,152 INFO [Listener at localhost/38073] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 18:15:27,152 INFO [Listener at localhost/38073] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 18:15:27,153 INFO [Listener at localhost/38073] http.HttpServer(1146): Jetty bound to port 45161 2023-07-16 18:15:27,153 INFO [Listener at localhost/38073] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 18:15:27,160 INFO [Listener at localhost/38073] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:27,160 INFO [Listener at localhost/38073] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@20df6e9f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/hadoop.log.dir/,AVAILABLE} 2023-07-16 18:15:27,161 INFO [Listener at localhost/38073] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:27,161 INFO [Listener at localhost/38073] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@700b9517{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 18:15:27,300 INFO [Listener at localhost/38073] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 18:15:27,301 INFO [Listener at localhost/38073] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 18:15:27,302 INFO [Listener at localhost/38073] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 18:15:27,302 INFO [Listener at localhost/38073] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-16 18:15:27,303 INFO [Listener at localhost/38073] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:27,305 INFO [Listener at localhost/38073] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4e85f979{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/java.io.tmpdir/jetty-0_0_0_0-45161-hbase-server-2_4_18-SNAPSHOT_jar-_-any-3654377853561167261/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 18:15:27,307 INFO [Listener at localhost/38073] server.AbstractConnector(333): Started ServerConnector@4fd2dab2{HTTP/1.1, (http/1.1)}{0.0.0.0:45161} 2023-07-16 18:15:27,308 INFO [Listener at localhost/38073] server.Server(415): Started @12087ms 2023-07-16 18:15:27,311 INFO [RS:3;jenkins-hbase4:44563] regionserver.HRegionServer(951): ClusterId : 750c298c-5826-454f-bacc-8ea8fc4f680a 2023-07-16 18:15:27,312 DEBUG [RS:3;jenkins-hbase4:44563] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 18:15:27,320 DEBUG [RS:3;jenkins-hbase4:44563] procedure.RegionServerProcedureManagerHost(45): Procedure 
flush-table-proc initialized 2023-07-16 18:15:27,320 DEBUG [RS:3;jenkins-hbase4:44563] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 18:15:27,323 DEBUG [RS:3;jenkins-hbase4:44563] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 18:15:27,327 DEBUG [RS:3;jenkins-hbase4:44563] zookeeper.ReadOnlyZKClient(139): Connect 0x03688c79 to 127.0.0.1:53498 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 18:15:27,332 DEBUG [RS:3;jenkins-hbase4:44563] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@daf7264, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 18:15:27,332 DEBUG [RS:3;jenkins-hbase4:44563] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@990e709, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 18:15:27,342 DEBUG [RS:3;jenkins-hbase4:44563] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:44563 2023-07-16 18:15:27,342 INFO [RS:3;jenkins-hbase4:44563] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 18:15:27,342 INFO [RS:3;jenkins-hbase4:44563] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 18:15:27,342 DEBUG [RS:3;jenkins-hbase4:44563] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 18:15:27,342 INFO [RS:3;jenkins-hbase4:44563] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,45445,1689531321197 with isa=jenkins-hbase4.apache.org/172.31.14.131:44563, startcode=1689531327107 2023-07-16 18:15:27,343 DEBUG [RS:3;jenkins-hbase4:44563] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 18:15:27,347 INFO [RS-EventLoopGroup-1-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51559, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.3 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 18:15:27,347 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=45445] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:27,348 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45445,1689531321197] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
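The 'list rsgroup' request logged a little earlier (RSGroupAdminService.ListRSGroupInfos) is what the test issues from the client side. A sketch, assuming the branch-2.4 hbase-rsgroup client class and its RSGroupAdminClient(Connection) constructor:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class ListRsGroupsSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Same master-side call as the ListRSGroupInfos request in the log.
      for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
        System.out.println(group.getName() + " servers=" + group.getServers());
      }
    }
  }
}
```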
2023-07-16 18:15:27,348 DEBUG [RS:3;jenkins-hbase4:44563] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92 2023-07-16 18:15:27,348 DEBUG [RS:3;jenkins-hbase4:44563] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:36523 2023-07-16 18:15:27,348 DEBUG [RS:3;jenkins-hbase4:44563] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=34821 2023-07-16 18:15:27,353 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:43375-0x1016f588ace0002, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:15:27,353 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:15:27,354 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:41927-0x1016f588ace0003, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:15:27,354 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:33809-0x1016f588ace0001, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:15:27,355 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45445,1689531321197] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:27,355 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,44563,1689531327107] 2023-07-16 18:15:27,356 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43375-0x1016f588ace0002, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:27,356 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45445,1689531321197] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 18:15:27,356 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33809-0x1016f588ace0001, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:27,356 DEBUG [RS:3;jenkins-hbase4:44563] zookeeper.ZKUtil(162): regionserver:44563-0x1016f588ace000b, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:27,356 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41927-0x1016f588ace0003, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:27,356 WARN [RS:3;jenkins-hbase4:44563] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
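RS:3 above (the new server on port 44563) is the extra region server the test base starts to restore the expected server count ('Restoring servers: 1'). A minimal sketch of doing the same against a running mini cluster, assuming a testUtil handle; the balancer switch mirrors the balanceSwitch=false call seen earlier in the log:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;
import org.apache.hadoop.hbase.util.JVMClusterUtil;

public class AddRegionServerSketch {
  static void addOneRegionServer(HBaseTestingUtility testUtil) throws Exception {
    // Keep the balancer off so regions are not moved onto the new server mid-test.
    testUtil.getAdmin().balancerSwitch(false, true);

    MiniHBaseCluster cluster = testUtil.getMiniHBaseCluster();
    // Spins up one more HRegionServer thread, like RS:3 in the log.
    JVMClusterUtil.RegionServerThread rst = cluster.startRegionServer();
    rst.waitForServerOnline();
  }
}
```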
2023-07-16 18:15:27,356 INFO [RS:3;jenkins-hbase4:44563] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 18:15:27,356 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43375-0x1016f588ace0002, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:27,357 DEBUG [RS:3;jenkins-hbase4:44563] regionserver.HRegionServer(1948): logDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/WALs/jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:27,364 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41927-0x1016f588ace0003, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:27,365 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,45445,1689531321197] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-16 18:15:27,365 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33809-0x1016f588ace0001, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:27,365 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43375-0x1016f588ace0002, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:27,365 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41927-0x1016f588ace0003, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:27,365 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33809-0x1016f588ace0001, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:27,366 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43375-0x1016f588ace0002, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:27,367 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:41927-0x1016f588ace0003, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:27,367 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:33809-0x1016f588ace0001, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:27,370 DEBUG [RS:3;jenkins-hbase4:44563] zookeeper.ZKUtil(162): regionserver:44563-0x1016f588ace000b, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:27,371 DEBUG [RS:3;jenkins-hbase4:44563] zookeeper.ZKUtil(162): regionserver:44563-0x1016f588ace000b, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:27,374 DEBUG [RS:3;jenkins-hbase4:44563] zookeeper.ZKUtil(162): regionserver:44563-0x1016f588ace000b, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing 
znode=/hbase/rs/jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:27,374 DEBUG [RS:3;jenkins-hbase4:44563] zookeeper.ZKUtil(162): regionserver:44563-0x1016f588ace000b, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:27,379 DEBUG [RS:3;jenkins-hbase4:44563] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 18:15:27,379 INFO [RS:3;jenkins-hbase4:44563] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 18:15:27,386 INFO [RS:3;jenkins-hbase4:44563] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 18:15:27,387 INFO [RS:3;jenkins-hbase4:44563] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 18:15:27,387 INFO [RS:3;jenkins-hbase4:44563] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:27,388 INFO [RS:3;jenkins-hbase4:44563] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 18:15:27,390 INFO [RS:3;jenkins-hbase4:44563] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:27,390 DEBUG [RS:3;jenkins-hbase4:44563] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:27,390 DEBUG [RS:3;jenkins-hbase4:44563] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:27,390 DEBUG [RS:3;jenkins-hbase4:44563] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:27,390 DEBUG [RS:3;jenkins-hbase4:44563] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:27,390 DEBUG [RS:3;jenkins-hbase4:44563] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:27,390 DEBUG [RS:3;jenkins-hbase4:44563] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 18:15:27,391 DEBUG [RS:3;jenkins-hbase4:44563] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:27,391 DEBUG [RS:3;jenkins-hbase4:44563] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:27,391 DEBUG [RS:3;jenkins-hbase4:44563] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:27,392 DEBUG [RS:3;jenkins-hbase4:44563] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:27,394 
INFO [RS:3;jenkins-hbase4:44563] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:27,395 INFO [RS:3;jenkins-hbase4:44563] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:27,395 INFO [RS:3;jenkins-hbase4:44563] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:27,414 INFO [RS:3;jenkins-hbase4:44563] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 18:15:27,414 INFO [RS:3;jenkins-hbase4:44563] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44563,1689531327107-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:27,429 INFO [RS:3;jenkins-hbase4:44563] regionserver.Replication(203): jenkins-hbase4.apache.org,44563,1689531327107 started 2023-07-16 18:15:27,430 INFO [RS:3;jenkins-hbase4:44563] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,44563,1689531327107, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:44563, sessionid=0x1016f588ace000b 2023-07-16 18:15:27,430 DEBUG [RS:3;jenkins-hbase4:44563] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 18:15:27,430 DEBUG [RS:3;jenkins-hbase4:44563] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:27,430 DEBUG [RS:3;jenkins-hbase4:44563] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44563,1689531327107' 2023-07-16 18:15:27,430 DEBUG [RS:3;jenkins-hbase4:44563] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 18:15:27,430 DEBUG [RS:3;jenkins-hbase4:44563] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 18:15:27,431 DEBUG [RS:3;jenkins-hbase4:44563] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 18:15:27,431 DEBUG [RS:3;jenkins-hbase4:44563] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 18:15:27,431 DEBUG [RS:3;jenkins-hbase4:44563] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:27,431 DEBUG [RS:3;jenkins-hbase4:44563] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,44563,1689531327107' 2023-07-16 18:15:27,431 DEBUG [RS:3;jenkins-hbase4:44563] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 18:15:27,432 DEBUG [RS:3;jenkins-hbase4:44563] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 18:15:27,432 DEBUG [RS:3;jenkins-hbase4:44563] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 18:15:27,432 INFO [RS:3;jenkins-hbase4:44563] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 18:15:27,432 INFO [RS:3;jenkins-hbase4:44563] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
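The records above show a fourth region server (RS:3, jenkins-hbase4.apache.org,44563) starting and finishing its initialization (executors, chores, procedure managers, quota managers) while the test is still setting up. As a rough sketch only, not taken from the test source, an additional region server can be added to a running mini cluster along these lines, assuming the standard HBaseTestingUtility / MiniHBaseCluster test APIs; the class name is made up for illustration:

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.MiniHBaseCluster;
    import org.apache.hadoop.hbase.util.JVMClusterUtil;

    public class ExtraRegionServerSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        // Start a mini cluster with three region servers, as in the earlier setup records.
        util.startMiniCluster(3);
        MiniHBaseCluster cluster = util.getMiniHBaseCluster();

        // Bring up one more region server, comparable to RS:3 (port 44563) in the log.
        JVMClusterUtil.RegionServerThread extraRs = cluster.startRegionServer();
        extraRs.waitForServerOnline();
        System.out.println("Started " + extraRs.getRegionServer().getServerName());

        util.shutdownMiniCluster();
      }
    }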
2023-07-16 18:15:27,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 18:15:27,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:27,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:27,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:15:27,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:15:27,453 DEBUG [hconnection-0x67375dfc-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 18:15:27,458 INFO [RS-EventLoopGroup-3-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:51652, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 18:15:27,464 DEBUG [hconnection-0x67375dfc-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 18:15:27,469 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:48576, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 18:15:27,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:27,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:27,487 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45445] to rsgroup master 2023-07-16 18:15:27,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:27,488 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:45244 deadline: 1689532527486, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 2023-07-16 18:15:27,489 WARN [Listener at localhost/38073] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 18:15:27,492 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:27,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:27,493 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:27,494 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927, jenkins-hbase4.apache.org:43375, jenkins-hbase4.apache.org:44563], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 18:15:27,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:15:27,502 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:27,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:15:27,504 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:27,506 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testTableMoveTruncateAndDrop_1745006964 2023-07-16 18:15:27,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1745006964 2023-07-16 18:15:27,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:27,513 
DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:27,513 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:27,518 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:15:27,522 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:27,523 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:27,526 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927] to rsgroup Group_testTableMoveTruncateAndDrop_1745006964 2023-07-16 18:15:27,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1745006964 2023-07-16 18:15:27,530 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:27,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:27,531 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:27,535 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(238): Moving server region 1588230740, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_1745006964 2023-07-16 18:15:27,537 INFO [RS:3;jenkins-hbase4:44563] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44563%2C1689531327107, suffix=, logDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/WALs/jenkins-hbase4.apache.org,44563,1689531327107, archiveDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/oldWALs, maxLogs=32 2023-07-16 18:15:27,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-16 18:15:27,539 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE 2023-07-16 18:15:27,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(238): Moving server region dc4034c470728512f24450a6af763b38, which do not belong to RSGroup Group_testTableMoveTruncateAndDrop_1745006964 2023-07-16 18:15:27,540 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as 
jenkins-hbase4.apache.org,33809,1689531323219, state=CLOSING 2023-07-16 18:15:27,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=dc4034c470728512f24450a6af763b38, REOPEN/MOVE 2023-07-16 18:15:27,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group default, current retry=0 2023-07-16 18:15:27,542 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:rsgroup, region=dc4034c470728512f24450a6af763b38, REOPEN/MOVE 2023-07-16 18:15:27,542 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-16 18:15:27,543 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 18:15:27,543 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=12, state=RUNNABLE; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33809,1689531323219}] 2023-07-16 18:15:27,545 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=dc4034c470728512f24450a6af763b38, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:27,545 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689531327545"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531327545"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531327545"}]},"ts":"1689531327545"} 2023-07-16 18:15:27,548 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; CloseRegionProcedure dc4034c470728512f24450a6af763b38, server=jenkins-hbase4.apache.org,41927,1689531323590}] 2023-07-16 18:15:27,550 DEBUG [PEWorker-3] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=15, ppid=13, state=RUNNABLE; CloseRegionProcedure dc4034c470728512f24450a6af763b38, server=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:27,567 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42677,DS-26280529-f724-4ea2-95a9-ef1b4940f60b,DISK] 2023-07-16 18:15:27,568 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36769,DS-6ab893e3-d584-4af5-92b1-09319086e884,DISK] 2023-07-16 18:15:27,568 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37737,DS-918fb536-8f98-483c-9977-d31c79d2a9f3,DISK] 2023-07-16 18:15:27,576 INFO [RS:3;jenkins-hbase4:44563] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/WALs/jenkins-hbase4.apache.org,44563,1689531327107/jenkins-hbase4.apache.org%2C44563%2C1689531327107.1689531327539 2023-07-16 18:15:27,582 DEBUG [RS:3;jenkins-hbase4:44563] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:42677,DS-26280529-f724-4ea2-95a9-ef1b4940f60b,DISK], DatanodeInfoWithStorage[127.0.0.1:37737,DS-918fb536-8f98-483c-9977-d31c79d2a9f3,DISK], DatanodeInfoWithStorage[127.0.0.1:36769,DS-6ab893e3-d584-4af5-92b1-09319086e884,DISK]] 2023-07-16 18:15:27,707 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1588230740 2023-07-16 18:15:27,708 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 18:15:27,708 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 18:15:27,708 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 18:15:27,708 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 18:15:27,708 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 18:15:27,709 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=2.85 KB heapSize=5.58 KB 2023-07-16 18:15:27,832 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2.67 KB at sequenceid=15 (bloomFilter=false), to=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/.tmp/info/bedee7ce31ae49328e9a2af23b711dee 2023-07-16 18:15:27,910 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=184 B at sequenceid=15 (bloomFilter=false), to=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/.tmp/table/e5fb4755f2c8409aa73616719ce1ddc4 2023-07-16 18:15:27,921 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/.tmp/info/bedee7ce31ae49328e9a2af23b711dee as hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/info/bedee7ce31ae49328e9a2af23b711dee 2023-07-16 18:15:27,934 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/info/bedee7ce31ae49328e9a2af23b711dee, entries=21, sequenceid=15, filesize=7.1 K 2023-07-16 18:15:27,937 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/.tmp/table/e5fb4755f2c8409aa73616719ce1ddc4 as hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/table/e5fb4755f2c8409aa73616719ce1ddc4 2023-07-16 
18:15:27,948 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/table/e5fb4755f2c8409aa73616719ce1ddc4, entries=4, sequenceid=15, filesize=4.8 K 2023-07-16 18:15:27,950 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~2.85 KB/2916, heapSize ~5.30 KB/5424, currentSize=0 B/0 for 1588230740 in 241ms, sequenceid=15, compaction requested=false 2023-07-16 18:15:27,952 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-16 18:15:27,969 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/recovered.edits/18.seqid, newMaxSeqId=18, maxSeqId=1 2023-07-16 18:15:27,970 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 18:15:27,970 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 18:15:27,971 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-16 18:15:27,971 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1588230740 move to jenkins-hbase4.apache.org,44563,1689531327107 record at close sequenceid=15 2023-07-16 18:15:27,975 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1588230740 2023-07-16 18:15:27,976 WARN [PEWorker-4] zookeeper.MetaTableLocator(225): Tried to set null ServerName in hbase:meta; skipping -- ServerName required 2023-07-16 18:15:27,986 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=12 2023-07-16 18:15:27,987 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=12, state=SUCCESS; CloseRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,33809,1689531323219 in 433 msec 2023-07-16 18:15:27,988 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44563,1689531327107; forceNewPlan=false, retain=false 2023-07-16 18:15:28,138 INFO [jenkins-hbase4:45445] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
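The ConstraintException logged at 18:15:27,488 is raised while the test harness (see the stack trace: TestRSGroupsBase.setUpBeforeMethod -> tearDownAfterMethod -> VerifyingRSGroupAdminClient.moveServers -> RSGroupAdminClient.moveServers) asks the master to move its own RPC address, jenkins-hbase4.apache.org:45445, into the "master" rsgroup; RSGroupAdminServer rejects the request because that address is not an online region server, and the test only logs it as "Got this on setup, FYI". A minimal client-side sketch of the same sequence, with the connection setup assumed and the host/port copied from the log (the class name is illustrative only):

    import java.util.Collections;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveMasterAddressSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          rsGroupAdmin.addRSGroup("master");
          try {
            // The master's address is not a registered region server, so the server
            // side answers with ConstraintException, matching the log above.
            rsGroupAdmin.moveServers(
                Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 45445)),
                "master");
          } catch (ConstraintException e) {
            // "Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist."
          }
        }
      }
    }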
2023-07-16 18:15:28,138 INFO [PEWorker-2] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44563,1689531327107, state=OPENING 2023-07-16 18:15:28,145 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-16 18:15:28,145 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=16, ppid=12, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,44563,1689531327107}] 2023-07-16 18:15:28,145 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 18:15:28,305 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:28,306 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 18:15:28,309 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58402, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 18:15:28,315 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-16 18:15:28,315 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 18:15:28,318 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44563%2C1689531327107.meta, suffix=.meta, logDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/WALs/jenkins-hbase4.apache.org,44563,1689531327107, archiveDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/oldWALs, maxLogs=32 2023-07-16 18:15:28,352 DEBUG [RS-EventLoopGroup-7-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:36769,DS-6ab893e3-d584-4af5-92b1-09319086e884,DISK] 2023-07-16 18:15:28,352 DEBUG [RS-EventLoopGroup-7-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:42677,DS-26280529-f724-4ea2-95a9-ef1b4940f60b,DISK] 2023-07-16 18:15:28,352 DEBUG [RS-EventLoopGroup-7-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:37737,DS-918fb536-8f98-483c-9977-d31c79d2a9f3,DISK] 2023-07-16 18:15:28,358 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/WALs/jenkins-hbase4.apache.org,44563,1689531327107/jenkins-hbase4.apache.org%2C44563%2C1689531327107.meta.1689531328319.meta 2023-07-16 18:15:28,358 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:36769,DS-6ab893e3-d584-4af5-92b1-09319086e884,DISK], DatanodeInfoWithStorage[127.0.0.1:37737,DS-918fb536-8f98-483c-9977-d31c79d2a9f3,DISK], DatanodeInfoWithStorage[127.0.0.1:42677,DS-26280529-f724-4ea2-95a9-ef1b4940f60b,DISK]] 2023-07-16 18:15:28,359 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:28,359 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 18:15:28,359 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-16 18:15:28,359 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-16 18:15:28,359 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-16 18:15:28,359 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:28,359 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-16 18:15:28,359 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-16 18:15:28,362 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 18:15:28,363 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/info 2023-07-16 18:15:28,363 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/info 2023-07-16 18:15:28,364 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 18:15:28,375 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded 
hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/info/bedee7ce31ae49328e9a2af23b711dee 2023-07-16 18:15:28,376 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:28,377 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 18:15:28,378 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/rep_barrier 2023-07-16 18:15:28,378 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/rep_barrier 2023-07-16 18:15:28,379 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 18:15:28,379 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:28,380 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 18:15:28,381 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/table 2023-07-16 18:15:28,381 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/table 2023-07-16 18:15:28,382 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output 
for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 18:15:28,394 DEBUG [StoreOpener-1588230740-1] regionserver.HStore(539): loaded hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/table/e5fb4755f2c8409aa73616719ce1ddc4 2023-07-16 18:15:28,394 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:28,396 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740 2023-07-16 18:15:28,398 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740 2023-07-16 18:15:28,402 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-16 18:15:28,404 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 18:15:28,406 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=19; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11362834400, jitterRate=0.058246418833732605}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 18:15:28,406 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 18:15:28,407 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=16, masterSystemTime=1689531328305 2023-07-16 18:15:28,412 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-16 18:15:28,412 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-16 18:15:28,413 INFO [PEWorker-1] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,44563,1689531327107, state=OPEN 2023-07-16 18:15:28,414 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-16 18:15:28,415 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 18:15:28,418 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=16, resume processing ppid=12 2023-07-16 18:15:28,418 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=16, ppid=12, state=SUCCESS; OpenRegionProcedure 1588230740, 
server=jenkins-hbase4.apache.org,44563,1689531327107 in 270 msec 2023-07-16 18:15:28,420 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, REOPEN/MOVE in 882 msec 2023-07-16 18:15:28,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure.ProcedureSyncWait(216): waitFor pid=12 2023-07-16 18:15:28,567 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close dc4034c470728512f24450a6af763b38 2023-07-16 18:15:28,568 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing dc4034c470728512f24450a6af763b38, disabling compactions & flushes 2023-07-16 18:15:28,568 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38. 2023-07-16 18:15:28,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38. 2023-07-16 18:15:28,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38. after waiting 0 ms 2023-07-16 18:15:28,569 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38. 2023-07-16 18:15:28,569 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing dc4034c470728512f24450a6af763b38 1/1 column families, dataSize=1.38 KB heapSize=2.37 KB 2023-07-16 18:15:28,643 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=1.38 KB at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/rsgroup/dc4034c470728512f24450a6af763b38/.tmp/m/dec38c002fef44ac845985dbcc8166b1 2023-07-16 18:15:28,655 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/rsgroup/dc4034c470728512f24450a6af763b38/.tmp/m/dec38c002fef44ac845985dbcc8166b1 as hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/rsgroup/dc4034c470728512f24450a6af763b38/m/dec38c002fef44ac845985dbcc8166b1 2023-07-16 18:15:28,663 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/rsgroup/dc4034c470728512f24450a6af763b38/m/dec38c002fef44ac845985dbcc8166b1, entries=3, sequenceid=9, filesize=5.2 K 2023-07-16 18:15:28,666 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~1.38 KB/1418, heapSize ~2.35 KB/2408, currentSize=0 B/0 for dc4034c470728512f24450a6af763b38 in 97ms, sequenceid=9, compaction requested=false 2023-07-16 18:15:28,666 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-16 18:15:28,675 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/rsgroup/dc4034c470728512f24450a6af763b38/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-16 18:15:28,676 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 18:15:28,676 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38. 2023-07-16 18:15:28,676 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for dc4034c470728512f24450a6af763b38: 2023-07-16 18:15:28,676 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding dc4034c470728512f24450a6af763b38 move to jenkins-hbase4.apache.org,44563,1689531327107 record at close sequenceid=9 2023-07-16 18:15:28,683 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed dc4034c470728512f24450a6af763b38 2023-07-16 18:15:28,683 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=dc4034c470728512f24450a6af763b38, regionState=CLOSED 2023-07-16 18:15:28,684 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689531328683"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531328683"}]},"ts":"1689531328683"} 2023-07-16 18:15:28,685 DEBUG [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=33809] ipc.CallRunner(144): callId: 41 service: ClientService methodName: Mutate size: 213 connection: 172.31.14.131:51622 deadline: 1689531388684, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44563 startCode=1689531327107. As of locationSeqNum=15. 2023-07-16 18:15:28,786 DEBUG [PEWorker-3] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 18:15:28,788 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58408, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 18:15:28,795 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-16 18:15:28,795 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; CloseRegionProcedure dc4034c470728512f24450a6af763b38, server=jenkins-hbase4.apache.org,41927,1689531323590 in 1.2440 sec 2023-07-16 18:15:28,796 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=13, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=dc4034c470728512f24450a6af763b38, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44563,1689531327107; forceNewPlan=false, retain=false 2023-07-16 18:15:28,946 INFO [jenkins-hbase4:45445] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-16 18:15:28,947 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=dc4034c470728512f24450a6af763b38, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:28,947 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689531328947"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531328947"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531328947"}]},"ts":"1689531328947"} 2023-07-16 18:15:28,950 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=13, state=RUNNABLE; OpenRegionProcedure dc4034c470728512f24450a6af763b38, server=jenkins-hbase4.apache.org,44563,1689531327107}] 2023-07-16 18:15:29,115 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38. 2023-07-16 18:15:29,115 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => dc4034c470728512f24450a6af763b38, NAME => 'hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:29,115 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 18:15:29,115 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38. service=MultiRowMutationService 2023-07-16 18:15:29,116 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
2023-07-16 18:15:29,116 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup dc4034c470728512f24450a6af763b38 2023-07-16 18:15:29,116 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:29,116 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for dc4034c470728512f24450a6af763b38 2023-07-16 18:15:29,116 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for dc4034c470728512f24450a6af763b38 2023-07-16 18:15:29,118 INFO [StoreOpener-dc4034c470728512f24450a6af763b38-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region dc4034c470728512f24450a6af763b38 2023-07-16 18:15:29,120 DEBUG [StoreOpener-dc4034c470728512f24450a6af763b38-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/rsgroup/dc4034c470728512f24450a6af763b38/m 2023-07-16 18:15:29,120 DEBUG [StoreOpener-dc4034c470728512f24450a6af763b38-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/rsgroup/dc4034c470728512f24450a6af763b38/m 2023-07-16 18:15:29,121 INFO [StoreOpener-dc4034c470728512f24450a6af763b38-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region dc4034c470728512f24450a6af763b38 columnFamilyName m 2023-07-16 18:15:29,135 DEBUG [StoreOpener-dc4034c470728512f24450a6af763b38-1] regionserver.HStore(539): loaded hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/rsgroup/dc4034c470728512f24450a6af763b38/m/dec38c002fef44ac845985dbcc8166b1 2023-07-16 18:15:29,135 INFO [StoreOpener-dc4034c470728512f24450a6af763b38-1] regionserver.HStore(310): Store=dc4034c470728512f24450a6af763b38/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:29,136 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/rsgroup/dc4034c470728512f24450a6af763b38 2023-07-16 18:15:29,150 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): 
Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/rsgroup/dc4034c470728512f24450a6af763b38 2023-07-16 18:15:29,155 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for dc4034c470728512f24450a6af763b38 2023-07-16 18:15:29,157 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened dc4034c470728512f24450a6af763b38; next sequenceid=13; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@7a97b43d, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:29,157 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for dc4034c470728512f24450a6af763b38: 2023-07-16 18:15:29,158 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38., pid=17, masterSystemTime=1689531329105 2023-07-16 18:15:29,161 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38. 2023-07-16 18:15:29,161 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38. 2023-07-16 18:15:29,162 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=dc4034c470728512f24450a6af763b38, regionState=OPEN, openSeqNum=13, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:29,162 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689531329162"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531329162"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531329162"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531329162"}]},"ts":"1689531329162"} 2023-07-16 18:15:29,168 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=13 2023-07-16 18:15:29,168 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=13, state=SUCCESS; OpenRegionProcedure dc4034c470728512f24450a6af763b38, server=jenkins-hbase4.apache.org,44563,1689531327107 in 215 msec 2023-07-16 18:15:29,170 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=dc4034c470728512f24450a6af763b38, REOPEN/MOVE in 1.6280 sec 2023-07-16 18:15:29,543 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure.ProcedureSyncWait(216): waitFor pid=13 2023-07-16 18:15:29,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33809,1689531323219, jenkins-hbase4.apache.org,41927,1689531323590] are moved back to default 2023-07-16 18:15:29,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testTableMoveTruncateAndDrop_1745006964 2023-07-16 18:15:29,544 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins 
(auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:29,545 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=41927] ipc.CallRunner(144): callId: 3 service: ClientService methodName: Scan size: 136 connection: 172.31.14.131:48576 deadline: 1689531389545, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44563 startCode=1689531327107. As of locationSeqNum=9. 2023-07-16 18:15:29,650 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=33809] ipc.CallRunner(144): callId: 4 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:51652 deadline: 1689531389650, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44563 startCode=1689531327107. As of locationSeqNum=15. 2023-07-16 18:15:29,752 DEBUG [hconnection-0x67375dfc-shared-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 18:15:29,757 INFO [RS-EventLoopGroup-7-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58424, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 18:15:29,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:29,797 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:29,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1745006964 2023-07-16 18:15:29,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:29,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 18:15:29,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=18, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 18:15:29,817 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 18:15:29,820 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=41927] ipc.CallRunner(144): callId: 46 service: ClientService methodName: ExecService size: 622 connection: 172.31.14.131:48562 deadline: 1689531389820, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: 
hostname=jenkins-hbase4.apache.org port=44563 startCode=1689531327107. As of locationSeqNum=9. 2023-07-16 18:15:29,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testTableMoveTruncateAndDrop" procId is: 18 2023-07-16 18:15:29,832 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-16 18:15:29,929 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1745006964 2023-07-16 18:15:29,929 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:29,930 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:29,930 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:29,937 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-16 18:15:29,940 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 18:15:29,945 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5ae9f31ecda082e767a2a62ef1f27f72 2023-07-16 18:15:29,945 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2089fc7d9545c3ff9650ab50dcd21066 2023-07-16 18:15:29,946 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b5f286b1157ed9594a74cb054eb66539 2023-07-16 18:15:29,946 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/63cbed7d5a556a760487d1ada473fcde 2023-07-16 18:15:29,946 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3816510628b0129bfdf2e5c6472c4c7b 2023-07-16 18:15:29,946 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3816510628b0129bfdf2e5c6472c4c7b empty. 2023-07-16 18:15:29,946 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5ae9f31ecda082e767a2a62ef1f27f72 empty. 2023-07-16 18:15:29,947 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/63cbed7d5a556a760487d1ada473fcde empty. 
2023-07-16 18:15:29,947 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b5f286b1157ed9594a74cb054eb66539 empty. 2023-07-16 18:15:29,947 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2089fc7d9545c3ff9650ab50dcd21066 empty. 2023-07-16 18:15:29,947 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3816510628b0129bfdf2e5c6472c4c7b 2023-07-16 18:15:29,947 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5ae9f31ecda082e767a2a62ef1f27f72 2023-07-16 18:15:29,950 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/63cbed7d5a556a760487d1ada473fcde 2023-07-16 18:15:29,950 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b5f286b1157ed9594a74cb054eb66539 2023-07-16 18:15:29,950 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2089fc7d9545c3ff9650ab50dcd21066 2023-07-16 18:15:29,950 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-16 18:15:29,985 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-16 18:15:29,987 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5ae9f31ecda082e767a2a62ef1f27f72, NAME => 'Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp 2023-07-16 18:15:29,987 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 2089fc7d9545c3ff9650ab50dcd21066, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', 
KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp 2023-07-16 18:15:29,987 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => b5f286b1157ed9594a74cb054eb66539, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp 2023-07-16 18:15:30,023 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:30,024 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing b5f286b1157ed9594a74cb054eb66539, disabling compactions & flushes 2023-07-16 18:15:30,024 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539. 2023-07-16 18:15:30,024 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539. 2023-07-16 18:15:30,024 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539. after waiting 0 ms 2023-07-16 18:15:30,024 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539. 2023-07-16 18:15:30,024 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539. 
2023-07-16 18:15:30,024 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:30,024 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:30,024 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for b5f286b1157ed9594a74cb054eb66539: 2023-07-16 18:15:30,027 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 2089fc7d9545c3ff9650ab50dcd21066, disabling compactions & flushes 2023-07-16 18:15:30,027 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 5ae9f31ecda082e767a2a62ef1f27f72, disabling compactions & flushes 2023-07-16 18:15:30,027 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72. 2023-07-16 18:15:30,027 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066. 2023-07-16 18:15:30,027 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 63cbed7d5a556a760487d1ada473fcde, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp 2023-07-16 18:15:30,027 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72. 2023-07-16 18:15:30,027 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066. 2023-07-16 18:15:30,028 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72. after waiting 0 ms 2023-07-16 18:15:30,028 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066. 
after waiting 0 ms 2023-07-16 18:15:30,028 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72. 2023-07-16 18:15:30,028 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066. 2023-07-16 18:15:30,028 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72. 2023-07-16 18:15:30,029 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066. 2023-07-16 18:15:30,029 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 5ae9f31ecda082e767a2a62ef1f27f72: 2023-07-16 18:15:30,029 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 2089fc7d9545c3ff9650ab50dcd21066: 2023-07-16 18:15:30,029 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 3816510628b0129bfdf2e5c6472c4c7b, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp 2023-07-16 18:15:30,054 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:30,055 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:30,055 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 3816510628b0129bfdf2e5c6472c4c7b, disabling compactions & flushes 2023-07-16 18:15:30,055 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 63cbed7d5a556a760487d1ada473fcde, disabling compactions & flushes 2023-07-16 18:15:30,055 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b. 
2023-07-16 18:15:30,055 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde. 2023-07-16 18:15:30,055 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b. 2023-07-16 18:15:30,055 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde. 2023-07-16 18:15:30,055 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b. after waiting 0 ms 2023-07-16 18:15:30,055 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde. after waiting 0 ms 2023-07-16 18:15:30,055 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b. 2023-07-16 18:15:30,056 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde. 2023-07-16 18:15:30,056 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b. 2023-07-16 18:15:30,056 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde. 
2023-07-16 18:15:30,056 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 3816510628b0129bfdf2e5c6472c4c7b: 2023-07-16 18:15:30,056 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 63cbed7d5a556a760487d1ada473fcde: 2023-07-16 18:15:30,059 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 18:15:30,060 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531330060"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531330060"}]},"ts":"1689531330060"} 2023-07-16 18:15:30,060 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531330060"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531330060"}]},"ts":"1689531330060"} 2023-07-16 18:15:30,061 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531330060"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531330060"}]},"ts":"1689531330060"} 2023-07-16 18:15:30,061 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531330060"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531330060"}]},"ts":"1689531330060"} 2023-07-16 18:15:30,061 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531330060"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531330060"}]},"ts":"1689531330060"} 2023-07-16 18:15:30,138 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-16 18:15:30,139 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-16 18:15:30,140 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 18:15:30,141 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531330140"}]},"ts":"1689531330140"} 2023-07-16 18:15:30,142 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-16 18:15:30,147 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 18:15:30,147 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 18:15:30,147 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 18:15:30,148 DEBUG [PEWorker-2] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 18:15:30,148 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5ae9f31ecda082e767a2a62ef1f27f72, ASSIGN}, {pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2089fc7d9545c3ff9650ab50dcd21066, ASSIGN}, {pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b5f286b1157ed9594a74cb054eb66539, ASSIGN}, {pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63cbed7d5a556a760487d1ada473fcde, ASSIGN}, {pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3816510628b0129bfdf2e5c6472c4c7b, ASSIGN}] 2023-07-16 18:15:30,151 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=20, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2089fc7d9545c3ff9650ab50dcd21066, ASSIGN 2023-07-16 18:15:30,151 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5ae9f31ecda082e767a2a62ef1f27f72, ASSIGN 2023-07-16 18:15:30,152 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b5f286b1157ed9594a74cb054eb66539, ASSIGN 2023-07-16 18:15:30,153 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63cbed7d5a556a760487d1ada473fcde, ASSIGN 2023-07-16 18:15:30,153 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=20, ppid=18, 
state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2089fc7d9545c3ff9650ab50dcd21066, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43375,1689531323422; forceNewPlan=false, retain=false 2023-07-16 18:15:30,154 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=19, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5ae9f31ecda082e767a2a62ef1f27f72, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44563,1689531327107; forceNewPlan=false, retain=false 2023-07-16 18:15:30,154 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=21, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b5f286b1157ed9594a74cb054eb66539, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43375,1689531323422; forceNewPlan=false, retain=false 2023-07-16 18:15:30,154 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=22, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63cbed7d5a556a760487d1ada473fcde, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43375,1689531323422; forceNewPlan=false, retain=false 2023-07-16 18:15:30,155 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3816510628b0129bfdf2e5c6472c4c7b, ASSIGN 2023-07-16 18:15:30,156 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=23, ppid=18, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3816510628b0129bfdf2e5c6472c4c7b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44563,1689531327107; forceNewPlan=false, retain=false 2023-07-16 18:15:30,304 INFO [jenkins-hbase4:45445] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-16 18:15:30,308 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=5ae9f31ecda082e767a2a62ef1f27f72, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:30,308 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=63cbed7d5a556a760487d1ada473fcde, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:30,308 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531330307"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531330307"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531330307"}]},"ts":"1689531330307"} 2023-07-16 18:15:30,308 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531330308"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531330308"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531330308"}]},"ts":"1689531330308"} 2023-07-16 18:15:30,308 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=3816510628b0129bfdf2e5c6472c4c7b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:30,308 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531330307"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531330307"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531330307"}]},"ts":"1689531330307"} 2023-07-16 18:15:30,308 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=2089fc7d9545c3ff9650ab50dcd21066, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:30,308 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=b5f286b1157ed9594a74cb054eb66539, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:30,309 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531330308"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531330308"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531330308"}]},"ts":"1689531330308"} 2023-07-16 18:15:30,309 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531330308"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531330308"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531330308"}]},"ts":"1689531330308"} 2023-07-16 18:15:30,314 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=24, ppid=19, state=RUNNABLE; OpenRegionProcedure 
5ae9f31ecda082e767a2a62ef1f27f72, server=jenkins-hbase4.apache.org,44563,1689531327107}] 2023-07-16 18:15:30,316 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=25, ppid=22, state=RUNNABLE; OpenRegionProcedure 63cbed7d5a556a760487d1ada473fcde, server=jenkins-hbase4.apache.org,43375,1689531323422}] 2023-07-16 18:15:30,319 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=26, ppid=23, state=RUNNABLE; OpenRegionProcedure 3816510628b0129bfdf2e5c6472c4c7b, server=jenkins-hbase4.apache.org,44563,1689531327107}] 2023-07-16 18:15:30,320 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=27, ppid=20, state=RUNNABLE; OpenRegionProcedure 2089fc7d9545c3ff9650ab50dcd21066, server=jenkins-hbase4.apache.org,43375,1689531323422}] 2023-07-16 18:15:30,321 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=28, ppid=21, state=RUNNABLE; OpenRegionProcedure b5f286b1157ed9594a74cb054eb66539, server=jenkins-hbase4.apache.org,43375,1689531323422}] 2023-07-16 18:15:30,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-16 18:15:30,473 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72. 2023-07-16 18:15:30,473 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539. 2023-07-16 18:15:30,473 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5ae9f31ecda082e767a2a62ef1f27f72, NAME => 'Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-16 18:15:30,473 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b5f286b1157ed9594a74cb054eb66539, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-16 18:15:30,474 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 5ae9f31ecda082e767a2a62ef1f27f72 2023-07-16 18:15:30,474 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:30,474 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b5f286b1157ed9594a74cb054eb66539 2023-07-16 18:15:30,474 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5ae9f31ecda082e767a2a62ef1f27f72 2023-07-16 18:15:30,474 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; 
minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:30,474 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5ae9f31ecda082e767a2a62ef1f27f72 2023-07-16 18:15:30,474 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b5f286b1157ed9594a74cb054eb66539 2023-07-16 18:15:30,474 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b5f286b1157ed9594a74cb054eb66539 2023-07-16 18:15:30,477 INFO [StoreOpener-5ae9f31ecda082e767a2a62ef1f27f72-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5ae9f31ecda082e767a2a62ef1f27f72 2023-07-16 18:15:30,477 INFO [StoreOpener-b5f286b1157ed9594a74cb054eb66539-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b5f286b1157ed9594a74cb054eb66539 2023-07-16 18:15:30,480 DEBUG [StoreOpener-b5f286b1157ed9594a74cb054eb66539-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/b5f286b1157ed9594a74cb054eb66539/f 2023-07-16 18:15:30,480 DEBUG [StoreOpener-b5f286b1157ed9594a74cb054eb66539-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/b5f286b1157ed9594a74cb054eb66539/f 2023-07-16 18:15:30,480 DEBUG [StoreOpener-5ae9f31ecda082e767a2a62ef1f27f72-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/5ae9f31ecda082e767a2a62ef1f27f72/f 2023-07-16 18:15:30,480 DEBUG [StoreOpener-5ae9f31ecda082e767a2a62ef1f27f72-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/5ae9f31ecda082e767a2a62ef1f27f72/f 2023-07-16 18:15:30,480 INFO [StoreOpener-5ae9f31ecda082e767a2a62ef1f27f72-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5ae9f31ecda082e767a2a62ef1f27f72 columnFamilyName f 2023-07-16 18:15:30,480 INFO [StoreOpener-b5f286b1157ed9594a74cb054eb66539-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b5f286b1157ed9594a74cb054eb66539 columnFamilyName f 2023-07-16 18:15:30,481 INFO [StoreOpener-5ae9f31ecda082e767a2a62ef1f27f72-1] regionserver.HStore(310): Store=5ae9f31ecda082e767a2a62ef1f27f72/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:30,493 INFO [StoreOpener-b5f286b1157ed9594a74cb054eb66539-1] regionserver.HStore(310): Store=b5f286b1157ed9594a74cb054eb66539/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:30,516 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/5ae9f31ecda082e767a2a62ef1f27f72 2023-07-16 18:15:30,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/b5f286b1157ed9594a74cb054eb66539 2023-07-16 18:15:30,517 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/5ae9f31ecda082e767a2a62ef1f27f72 2023-07-16 18:15:30,518 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/b5f286b1157ed9594a74cb054eb66539 2023-07-16 18:15:30,522 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b5f286b1157ed9594a74cb054eb66539 2023-07-16 18:15:30,523 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5ae9f31ecda082e767a2a62ef1f27f72 2023-07-16 18:15:30,556 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/b5f286b1157ed9594a74cb054eb66539/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:30,557 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/5ae9f31ecda082e767a2a62ef1f27f72/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:30,557 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b5f286b1157ed9594a74cb054eb66539; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, 
ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11650626880, jitterRate=0.08504918217658997}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:30,557 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b5f286b1157ed9594a74cb054eb66539: 2023-07-16 18:15:30,558 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5ae9f31ecda082e767a2a62ef1f27f72; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11186908960, jitterRate=0.041862085461616516}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:30,558 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5ae9f31ecda082e767a2a62ef1f27f72: 2023-07-16 18:15:30,559 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539., pid=28, masterSystemTime=1689531330469 2023-07-16 18:15:30,560 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72., pid=24, masterSystemTime=1689531330467 2023-07-16 18:15:30,562 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539. 2023-07-16 18:15:30,562 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539. 2023-07-16 18:15:30,562 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde. 
2023-07-16 18:15:30,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 63cbed7d5a556a760487d1ada473fcde, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-16 18:15:30,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 63cbed7d5a556a760487d1ada473fcde 2023-07-16 18:15:30,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:30,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 63cbed7d5a556a760487d1ada473fcde 2023-07-16 18:15:30,563 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 63cbed7d5a556a760487d1ada473fcde 2023-07-16 18:15:30,565 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=b5f286b1157ed9594a74cb054eb66539, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:30,565 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531330565"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531330565"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531330565"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531330565"}]},"ts":"1689531330565"} 2023-07-16 18:15:30,566 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=19 updating hbase:meta row=5ae9f31ecda082e767a2a62ef1f27f72, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:30,566 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72. 2023-07-16 18:15:30,599 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72. 2023-07-16 18:15:30,599 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b. 
2023-07-16 18:15:30,599 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3816510628b0129bfdf2e5c6472c4c7b, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-16 18:15:30,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 3816510628b0129bfdf2e5c6472c4c7b 2023-07-16 18:15:30,588 INFO [StoreOpener-63cbed7d5a556a760487d1ada473fcde-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 63cbed7d5a556a760487d1ada473fcde 2023-07-16 18:15:30,574 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b5f286b1157ed9594a74cb054eb66539, ASSIGN in 423 msec 2023-07-16 18:15:30,571 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=28, resume processing ppid=21 2023-07-16 18:15:30,566 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531330566"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531330566"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531330566"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531330566"}]},"ts":"1689531330566"} 2023-07-16 18:15:30,600 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=28, ppid=21, state=SUCCESS; OpenRegionProcedure b5f286b1157ed9594a74cb054eb66539, server=jenkins-hbase4.apache.org,43375,1689531323422 in 247 msec 2023-07-16 18:15:30,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:30,602 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3816510628b0129bfdf2e5c6472c4c7b 2023-07-16 18:15:30,602 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3816510628b0129bfdf2e5c6472c4c7b 2023-07-16 18:15:30,618 DEBUG [StoreOpener-63cbed7d5a556a760487d1ada473fcde-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/63cbed7d5a556a760487d1ada473fcde/f 2023-07-16 18:15:30,619 DEBUG [StoreOpener-63cbed7d5a556a760487d1ada473fcde-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/63cbed7d5a556a760487d1ada473fcde/f 2023-07-16 18:15:30,620 INFO [StoreOpener-63cbed7d5a556a760487d1ada473fcde-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files 
[minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 63cbed7d5a556a760487d1ada473fcde columnFamilyName f 2023-07-16 18:15:30,621 INFO [StoreOpener-63cbed7d5a556a760487d1ada473fcde-1] regionserver.HStore(310): Store=63cbed7d5a556a760487d1ada473fcde/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:30,623 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=24, resume processing ppid=19 2023-07-16 18:15:30,636 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=24, ppid=19, state=SUCCESS; OpenRegionProcedure 5ae9f31ecda082e767a2a62ef1f27f72, server=jenkins-hbase4.apache.org,44563,1689531327107 in 304 msec 2023-07-16 18:15:30,625 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=19, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5ae9f31ecda082e767a2a62ef1f27f72, ASSIGN in 475 msec 2023-07-16 18:15:30,637 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/63cbed7d5a556a760487d1ada473fcde 2023-07-16 18:15:30,637 INFO [StoreOpener-3816510628b0129bfdf2e5c6472c4c7b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3816510628b0129bfdf2e5c6472c4c7b 2023-07-16 18:15:30,638 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/63cbed7d5a556a760487d1ada473fcde 2023-07-16 18:15:30,641 DEBUG [StoreOpener-3816510628b0129bfdf2e5c6472c4c7b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/3816510628b0129bfdf2e5c6472c4c7b/f 2023-07-16 18:15:30,641 DEBUG [StoreOpener-3816510628b0129bfdf2e5c6472c4c7b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/3816510628b0129bfdf2e5c6472c4c7b/f 2023-07-16 18:15:30,641 INFO [StoreOpener-3816510628b0129bfdf2e5c6472c4c7b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction 
policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3816510628b0129bfdf2e5c6472c4c7b columnFamilyName f 2023-07-16 18:15:30,642 INFO [StoreOpener-3816510628b0129bfdf2e5c6472c4c7b-1] regionserver.HStore(310): Store=3816510628b0129bfdf2e5c6472c4c7b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:30,644 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/3816510628b0129bfdf2e5c6472c4c7b 2023-07-16 18:15:30,644 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 63cbed7d5a556a760487d1ada473fcde 2023-07-16 18:15:30,652 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/3816510628b0129bfdf2e5c6472c4c7b 2023-07-16 18:15:30,657 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3816510628b0129bfdf2e5c6472c4c7b 2023-07-16 18:15:30,683 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/63cbed7d5a556a760487d1ada473fcde/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:30,684 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 63cbed7d5a556a760487d1ada473fcde; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10109067200, jitterRate=-0.05851975083351135}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:30,684 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 63cbed7d5a556a760487d1ada473fcde: 2023-07-16 18:15:30,686 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde., pid=25, masterSystemTime=1689531330469 2023-07-16 18:15:30,691 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde. 2023-07-16 18:15:30,691 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde. 2023-07-16 18:15:30,691 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066. 
2023-07-16 18:15:30,692 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2089fc7d9545c3ff9650ab50dcd21066, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-16 18:15:30,693 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/3816510628b0129bfdf2e5c6472c4c7b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:30,695 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 2089fc7d9545c3ff9650ab50dcd21066 2023-07-16 18:15:30,695 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:30,695 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2089fc7d9545c3ff9650ab50dcd21066 2023-07-16 18:15:30,695 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2089fc7d9545c3ff9650ab50dcd21066 2023-07-16 18:15:30,695 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3816510628b0129bfdf2e5c6472c4c7b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9489592800, jitterRate=-0.11621280014514923}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:30,696 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3816510628b0129bfdf2e5c6472c4c7b: 2023-07-16 18:15:30,697 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b., pid=26, masterSystemTime=1689531330467 2023-07-16 18:15:30,697 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=22 updating hbase:meta row=63cbed7d5a556a760487d1ada473fcde, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:30,698 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531330697"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531330697"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531330697"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531330697"}]},"ts":"1689531330697"} 2023-07-16 18:15:30,702 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b. 
2023-07-16 18:15:30,702 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b. 2023-07-16 18:15:30,703 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=23 updating hbase:meta row=3816510628b0129bfdf2e5c6472c4c7b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:30,716 INFO [StoreOpener-2089fc7d9545c3ff9650ab50dcd21066-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2089fc7d9545c3ff9650ab50dcd21066 2023-07-16 18:15:30,716 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531330703"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531330703"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531330703"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531330703"}]},"ts":"1689531330703"} 2023-07-16 18:15:30,708 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63cbed7d5a556a760487d1ada473fcde, ASSIGN in 558 msec 2023-07-16 18:15:30,706 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=25, resume processing ppid=22 2023-07-16 18:15:30,719 DEBUG [StoreOpener-2089fc7d9545c3ff9650ab50dcd21066-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/2089fc7d9545c3ff9650ab50dcd21066/f 2023-07-16 18:15:30,721 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=25, ppid=22, state=SUCCESS; OpenRegionProcedure 63cbed7d5a556a760487d1ada473fcde, server=jenkins-hbase4.apache.org,43375,1689531323422 in 386 msec 2023-07-16 18:15:30,721 DEBUG [StoreOpener-2089fc7d9545c3ff9650ab50dcd21066-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/2089fc7d9545c3ff9650ab50dcd21066/f 2023-07-16 18:15:30,722 INFO [StoreOpener-2089fc7d9545c3ff9650ab50dcd21066-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2089fc7d9545c3ff9650ab50dcd21066 columnFamilyName f 2023-07-16 18:15:30,723 INFO [StoreOpener-2089fc7d9545c3ff9650ab50dcd21066-1] regionserver.HStore(310): Store=2089fc7d9545c3ff9650ab50dcd21066/f, memstore type=DefaultMemStore, storagePolicy=HOT, 
verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:30,724 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/2089fc7d9545c3ff9650ab50dcd21066 2023-07-16 18:15:30,728 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/2089fc7d9545c3ff9650ab50dcd21066 2023-07-16 18:15:30,759 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2089fc7d9545c3ff9650ab50dcd21066 2023-07-16 18:15:30,762 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=26, resume processing ppid=23 2023-07-16 18:15:30,762 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=26, ppid=23, state=SUCCESS; OpenRegionProcedure 3816510628b0129bfdf2e5c6472c4c7b, server=jenkins-hbase4.apache.org,44563,1689531327107 in 400 msec 2023-07-16 18:15:30,768 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=23, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3816510628b0129bfdf2e5c6472c4c7b, ASSIGN in 614 msec 2023-07-16 18:15:30,784 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/2089fc7d9545c3ff9650ab50dcd21066/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:30,791 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2089fc7d9545c3ff9650ab50dcd21066; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10780850240, jitterRate=0.004044920206069946}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:30,791 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2089fc7d9545c3ff9650ab50dcd21066: 2023-07-16 18:15:30,792 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066., pid=27, masterSystemTime=1689531330469 2023-07-16 18:15:30,795 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066. 2023-07-16 18:15:30,795 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066. 
2023-07-16 18:15:30,799 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=20 updating hbase:meta row=2089fc7d9545c3ff9650ab50dcd21066, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:30,800 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531330798"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531330798"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531330798"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531330798"}]},"ts":"1689531330798"} 2023-07-16 18:15:30,814 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=27, resume processing ppid=20 2023-07-16 18:15:30,814 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=27, ppid=20, state=SUCCESS; OpenRegionProcedure 2089fc7d9545c3ff9650ab50dcd21066, server=jenkins-hbase4.apache.org,43375,1689531323422 in 484 msec 2023-07-16 18:15:30,817 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=20, resume processing ppid=18 2023-07-16 18:15:30,819 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=20, ppid=18, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2089fc7d9545c3ff9650ab50dcd21066, ASSIGN in 666 msec 2023-07-16 18:15:30,824 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 18:15:30,825 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531330824"}]},"ts":"1689531330824"} 2023-07-16 18:15:30,832 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-16 18:15:30,848 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=18, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 18:15:30,851 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=18, state=SUCCESS; CreateTableProcedure table=Group_testTableMoveTruncateAndDrop in 1.0350 sec 2023-07-16 18:15:30,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=18 2023-07-16 18:15:30,943 INFO [Listener at localhost/38073] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 18 completed 2023-07-16 18:15:30,943 DEBUG [Listener at localhost/38073] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testTableMoveTruncateAndDrop get assigned. 
Timeout = 60000ms 2023-07-16 18:15:30,945 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:30,946 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=33809] ipc.CallRunner(144): callId: 49 service: ClientService methodName: Scan size: 95 connection: 172.31.14.131:51650 deadline: 1689531390946, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44563 startCode=1689531327107. As of locationSeqNum=15. 2023-07-16 18:15:31,051 DEBUG [hconnection-0x1a923ff9-shared-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 18:15:31,061 INFO [RS-EventLoopGroup-7-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59666, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 18:15:31,075 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(3484): All regions for table Group_testTableMoveTruncateAndDrop assigned to meta. Checking AM states. 2023-07-16 18:15:31,076 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:31,076 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(3504): All regions for table Group_testTableMoveTruncateAndDrop assigned. 2023-07-16 18:15:31,077 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:31,082 DEBUG [Listener at localhost/38073] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 18:15:31,084 INFO [RS-EventLoopGroup-3-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43186, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 18:15:31,086 DEBUG [Listener at localhost/38073] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 18:15:31,089 INFO [RS-EventLoopGroup-5-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35464, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 18:15:31,094 DEBUG [Listener at localhost/38073] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 18:15:31,096 INFO [RS-EventLoopGroup-4-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:42200, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 18:15:31,098 DEBUG [Listener at localhost/38073] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 18:15:31,100 INFO [RS-EventLoopGroup-7-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59676, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 18:15:31,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-16 18:15:31,111 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 18:15:31,112 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsAdmin1(307): Moving 
table Group_testTableMoveTruncateAndDrop to Group_testTableMoveTruncateAndDrop_1745006964 2023-07-16 18:15:31,121 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testTableMoveTruncateAndDrop] to rsgroup Group_testTableMoveTruncateAndDrop_1745006964 2023-07-16 18:15:31,125 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1745006964 2023-07-16 18:15:31,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:31,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:31,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:31,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testTableMoveTruncateAndDrop to RSGroup Group_testTableMoveTruncateAndDrop_1745006964 2023-07-16 18:15:31,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(345): Moving region 5ae9f31ecda082e767a2a62ef1f27f72 to RSGroup Group_testTableMoveTruncateAndDrop_1745006964 2023-07-16 18:15:31,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 18:15:31,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 18:15:31,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 18:15:31,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 18:15:31,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 18:15:31,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5ae9f31ecda082e767a2a62ef1f27f72, REOPEN/MOVE 2023-07-16 18:15:31,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(345): Moving region 2089fc7d9545c3ff9650ab50dcd21066 to RSGroup Group_testTableMoveTruncateAndDrop_1745006964 2023-07-16 18:15:31,138 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5ae9f31ecda082e767a2a62ef1f27f72, REOPEN/MOVE 2023-07-16 18:15:31,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 18:15:31,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 18:15:31,139 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 18:15:31,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 18:15:31,139 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 18:15:31,140 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=5ae9f31ecda082e767a2a62ef1f27f72, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:31,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2089fc7d9545c3ff9650ab50dcd21066, REOPEN/MOVE 2023-07-16 18:15:31,140 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(345): Moving region b5f286b1157ed9594a74cb054eb66539 to RSGroup Group_testTableMoveTruncateAndDrop_1745006964 2023-07-16 18:15:31,140 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531331140"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531331140"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531331140"}]},"ts":"1689531331140"} 2023-07-16 18:15:31,142 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2089fc7d9545c3ff9650ab50dcd21066, REOPEN/MOVE 2023-07-16 18:15:31,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 18:15:31,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 18:15:31,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 18:15:31,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 18:15:31,142 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 18:15:31,143 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=2089fc7d9545c3ff9650ab50dcd21066, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:31,143 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b5f286b1157ed9594a74cb054eb66539, REOPEN/MOVE 2023-07-16 18:15:31,143 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(345): Moving region 63cbed7d5a556a760487d1ada473fcde to RSGroup Group_testTableMoveTruncateAndDrop_1745006964 2023-07-16 18:15:31,143 
DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531331143"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531331143"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531331143"}]},"ts":"1689531331143"} 2023-07-16 18:15:31,145 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b5f286b1157ed9594a74cb054eb66539, REOPEN/MOVE 2023-07-16 18:15:31,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 18:15:31,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 18:15:31,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 18:15:31,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 18:15:31,145 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 18:15:31,146 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=b5f286b1157ed9594a74cb054eb66539, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:31,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63cbed7d5a556a760487d1ada473fcde, REOPEN/MOVE 2023-07-16 18:15:31,146 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(345): Moving region 3816510628b0129bfdf2e5c6472c4c7b to RSGroup Group_testTableMoveTruncateAndDrop_1745006964 2023-07-16 18:15:31,146 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531331146"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531331146"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531331146"}]},"ts":"1689531331146"} 2023-07-16 18:15:31,146 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 18:15:31,146 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=33, ppid=29, state=RUNNABLE; CloseRegionProcedure 5ae9f31ecda082e767a2a62ef1f27f72, server=jenkins-hbase4.apache.org,44563,1689531327107}] 2023-07-16 18:15:31,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 18:15:31,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 18:15:31,148 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 18:15:31,148 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 18:15:31,148 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63cbed7d5a556a760487d1ada473fcde, REOPEN/MOVE 2023-07-16 18:15:31,149 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=35, ppid=30, state=RUNNABLE; CloseRegionProcedure 2089fc7d9545c3ff9650ab50dcd21066, server=jenkins-hbase4.apache.org,43375,1689531323422}] 2023-07-16 18:15:31,150 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3816510628b0129bfdf2e5c6472c4c7b, REOPEN/MOVE 2023-07-16 18:15:31,150 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(286): Moving 5 region(s) to group Group_testTableMoveTruncateAndDrop_1745006964, current retry=0 2023-07-16 18:15:31,151 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3816510628b0129bfdf2e5c6472c4c7b, REOPEN/MOVE 2023-07-16 18:15:31,150 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=63cbed7d5a556a760487d1ada473fcde, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:31,151 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531331149"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531331149"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531331149"}]},"ts":"1689531331149"} 2023-07-16 18:15:31,153 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=36, ppid=31, state=RUNNABLE; CloseRegionProcedure b5f286b1157ed9594a74cb054eb66539, server=jenkins-hbase4.apache.org,43375,1689531323422}] 2023-07-16 18:15:31,153 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=3816510628b0129bfdf2e5c6472c4c7b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:31,153 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531331153"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531331153"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531331153"}]},"ts":"1689531331153"} 2023-07-16 18:15:31,155 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=37, ppid=32, state=RUNNABLE; CloseRegionProcedure 63cbed7d5a556a760487d1ada473fcde, server=jenkins-hbase4.apache.org,43375,1689531323422}] 2023-07-16 18:15:31,157 INFO [PEWorker-3] 
procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=38, ppid=34, state=RUNNABLE; CloseRegionProcedure 3816510628b0129bfdf2e5c6472c4c7b, server=jenkins-hbase4.apache.org,44563,1689531327107}] 2023-07-16 18:15:31,302 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5ae9f31ecda082e767a2a62ef1f27f72 2023-07-16 18:15:31,304 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5ae9f31ecda082e767a2a62ef1f27f72, disabling compactions & flushes 2023-07-16 18:15:31,304 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72. 2023-07-16 18:15:31,304 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72. 2023-07-16 18:15:31,304 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72. after waiting 0 ms 2023-07-16 18:15:31,304 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72. 2023-07-16 18:15:31,307 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2089fc7d9545c3ff9650ab50dcd21066 2023-07-16 18:15:31,308 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2089fc7d9545c3ff9650ab50dcd21066, disabling compactions & flushes 2023-07-16 18:15:31,308 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066. 2023-07-16 18:15:31,308 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066. 2023-07-16 18:15:31,308 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066. after waiting 0 ms 2023-07-16 18:15:31,308 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066. 2023-07-16 18:15:31,318 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/5ae9f31ecda082e767a2a62ef1f27f72/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 18:15:31,318 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/2089fc7d9545c3ff9650ab50dcd21066/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 18:15:31,321 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72. 
2023-07-16 18:15:31,322 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5ae9f31ecda082e767a2a62ef1f27f72: 2023-07-16 18:15:31,322 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 5ae9f31ecda082e767a2a62ef1f27f72 move to jenkins-hbase4.apache.org,41927,1689531323590 record at close sequenceid=2 2023-07-16 18:15:31,322 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066. 2023-07-16 18:15:31,322 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2089fc7d9545c3ff9650ab50dcd21066: 2023-07-16 18:15:31,322 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 2089fc7d9545c3ff9650ab50dcd21066 move to jenkins-hbase4.apache.org,33809,1689531323219 record at close sequenceid=2 2023-07-16 18:15:31,337 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2089fc7d9545c3ff9650ab50dcd21066 2023-07-16 18:15:31,337 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b5f286b1157ed9594a74cb054eb66539 2023-07-16 18:15:31,337 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=2089fc7d9545c3ff9650ab50dcd21066, regionState=CLOSED 2023-07-16 18:15:31,339 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b5f286b1157ed9594a74cb054eb66539, disabling compactions & flushes 2023-07-16 18:15:31,340 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539. 2023-07-16 18:15:31,340 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531331337"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531331337"}]},"ts":"1689531331337"} 2023-07-16 18:15:31,340 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539. 2023-07-16 18:15:31,341 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539. after waiting 0 ms 2023-07-16 18:15:31,341 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539. 
2023-07-16 18:15:31,341 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5ae9f31ecda082e767a2a62ef1f27f72 2023-07-16 18:15:31,341 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3816510628b0129bfdf2e5c6472c4c7b 2023-07-16 18:15:31,342 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=5ae9f31ecda082e767a2a62ef1f27f72, regionState=CLOSED 2023-07-16 18:15:31,342 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531331342"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531331342"}]},"ts":"1689531331342"} 2023-07-16 18:15:31,347 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3816510628b0129bfdf2e5c6472c4c7b, disabling compactions & flushes 2023-07-16 18:15:31,347 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b. 2023-07-16 18:15:31,347 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b. 2023-07-16 18:15:31,347 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b. after waiting 0 ms 2023-07-16 18:15:31,347 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b. 
2023-07-16 18:15:31,348 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-16 18:15:31,353 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=33, resume processing ppid=29 2023-07-16 18:15:31,353 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=33, ppid=29, state=SUCCESS; CloseRegionProcedure 5ae9f31ecda082e767a2a62ef1f27f72, server=jenkins-hbase4.apache.org,44563,1689531327107 in 198 msec 2023-07-16 18:15:31,353 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=35, resume processing ppid=30 2023-07-16 18:15:31,353 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=35, ppid=30, state=SUCCESS; CloseRegionProcedure 2089fc7d9545c3ff9650ab50dcd21066, server=jenkins-hbase4.apache.org,43375,1689531323422 in 195 msec 2023-07-16 18:15:31,355 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=29, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5ae9f31ecda082e767a2a62ef1f27f72, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41927,1689531323590; forceNewPlan=false, retain=false 2023-07-16 18:15:31,356 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=30, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2089fc7d9545c3ff9650ab50dcd21066, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33809,1689531323219; forceNewPlan=false, retain=false 2023-07-16 18:15:31,398 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/3816510628b0129bfdf2e5c6472c4c7b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 18:15:31,400 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b. 2023-07-16 18:15:31,400 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3816510628b0129bfdf2e5c6472c4c7b: 2023-07-16 18:15:31,401 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 3816510628b0129bfdf2e5c6472c4c7b move to jenkins-hbase4.apache.org,41927,1689531323590 record at close sequenceid=2 2023-07-16 18:15:31,401 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/b5f286b1157ed9594a74cb054eb66539/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 18:15:31,403 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539. 
2023-07-16 18:15:31,404 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b5f286b1157ed9594a74cb054eb66539: 2023-07-16 18:15:31,404 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding b5f286b1157ed9594a74cb054eb66539 move to jenkins-hbase4.apache.org,41927,1689531323590 record at close sequenceid=2 2023-07-16 18:15:31,405 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3816510628b0129bfdf2e5c6472c4c7b 2023-07-16 18:15:31,415 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=3816510628b0129bfdf2e5c6472c4c7b, regionState=CLOSED 2023-07-16 18:15:31,416 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531331415"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531331415"}]},"ts":"1689531331415"} 2023-07-16 18:15:31,417 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b5f286b1157ed9594a74cb054eb66539 2023-07-16 18:15:31,417 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 63cbed7d5a556a760487d1ada473fcde 2023-07-16 18:15:31,418 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 63cbed7d5a556a760487d1ada473fcde, disabling compactions & flushes 2023-07-16 18:15:31,418 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde. 2023-07-16 18:15:31,418 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde. 2023-07-16 18:15:31,418 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde. after waiting 0 ms 2023-07-16 18:15:31,418 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde. 
2023-07-16 18:15:31,420 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=b5f286b1157ed9594a74cb054eb66539, regionState=CLOSED 2023-07-16 18:15:31,420 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531331420"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531331420"}]},"ts":"1689531331420"} 2023-07-16 18:15:31,429 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/63cbed7d5a556a760487d1ada473fcde/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 18:15:31,431 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde. 2023-07-16 18:15:31,431 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 63cbed7d5a556a760487d1ada473fcde: 2023-07-16 18:15:31,431 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 63cbed7d5a556a760487d1ada473fcde move to jenkins-hbase4.apache.org,41927,1689531323590 record at close sequenceid=2 2023-07-16 18:15:31,433 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=38, resume processing ppid=34 2023-07-16 18:15:31,433 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=38, ppid=34, state=SUCCESS; CloseRegionProcedure 3816510628b0129bfdf2e5c6472c4c7b, server=jenkins-hbase4.apache.org,44563,1689531327107 in 262 msec 2023-07-16 18:15:31,434 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 63cbed7d5a556a760487d1ada473fcde 2023-07-16 18:15:31,435 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=34, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3816510628b0129bfdf2e5c6472c4c7b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41927,1689531323590; forceNewPlan=false, retain=false 2023-07-16 18:15:31,435 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=36, resume processing ppid=31 2023-07-16 18:15:31,435 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=63cbed7d5a556a760487d1ada473fcde, regionState=CLOSED 2023-07-16 18:15:31,435 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531331435"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531331435"}]},"ts":"1689531331435"} 2023-07-16 18:15:31,435 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=36, ppid=31, state=SUCCESS; CloseRegionProcedure b5f286b1157ed9594a74cb054eb66539, server=jenkins-hbase4.apache.org,43375,1689531323422 in 278 msec 2023-07-16 18:15:31,438 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=31, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; 
TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b5f286b1157ed9594a74cb054eb66539, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41927,1689531323590; forceNewPlan=false, retain=false 2023-07-16 18:15:31,441 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=37, resume processing ppid=32 2023-07-16 18:15:31,441 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=37, ppid=32, state=SUCCESS; CloseRegionProcedure 63cbed7d5a556a760487d1ada473fcde, server=jenkins-hbase4.apache.org,43375,1689531323422 in 283 msec 2023-07-16 18:15:31,442 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=32, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63cbed7d5a556a760487d1ada473fcde, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41927,1689531323590; forceNewPlan=false, retain=false 2023-07-16 18:15:31,471 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-16 18:15:31,473 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-16 18:15:31,473 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-16 18:15:31,473 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 18:15:31,473 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-16 18:15:31,474 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-16 18:15:31,474 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-16 18:15:31,505 INFO [jenkins-hbase4:45445] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-16 18:15:31,507 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=b5f286b1157ed9594a74cb054eb66539, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:31,507 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=2089fc7d9545c3ff9650ab50dcd21066, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:31,507 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=3816510628b0129bfdf2e5c6472c4c7b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:31,507 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=63cbed7d5a556a760487d1ada473fcde, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:31,507 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531331507"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531331507"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531331507"}]},"ts":"1689531331507"} 2023-07-16 18:15:31,507 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531331507"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531331507"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531331507"}]},"ts":"1689531331507"} 2023-07-16 18:15:31,507 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=5ae9f31ecda082e767a2a62ef1f27f72, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:31,507 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531331507"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531331507"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531331507"}]},"ts":"1689531331507"} 2023-07-16 18:15:31,507 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531331507"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531331507"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531331507"}]},"ts":"1689531331507"} 2023-07-16 18:15:31,507 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531331507"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531331507"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531331507"}]},"ts":"1689531331507"} 2023-07-16 18:15:31,510 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=39, ppid=30, state=RUNNABLE; OpenRegionProcedure 
2089fc7d9545c3ff9650ab50dcd21066, server=jenkins-hbase4.apache.org,33809,1689531323219}] 2023-07-16 18:15:31,511 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=40, ppid=32, state=RUNNABLE; OpenRegionProcedure 63cbed7d5a556a760487d1ada473fcde, server=jenkins-hbase4.apache.org,41927,1689531323590}] 2023-07-16 18:15:31,513 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=41, ppid=29, state=RUNNABLE; OpenRegionProcedure 5ae9f31ecda082e767a2a62ef1f27f72, server=jenkins-hbase4.apache.org,41927,1689531323590}] 2023-07-16 18:15:31,517 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=42, ppid=34, state=RUNNABLE; OpenRegionProcedure 3816510628b0129bfdf2e5c6472c4c7b, server=jenkins-hbase4.apache.org,41927,1689531323590}] 2023-07-16 18:15:31,519 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=43, ppid=31, state=RUNNABLE; OpenRegionProcedure b5f286b1157ed9594a74cb054eb66539, server=jenkins-hbase4.apache.org,41927,1689531323590}] 2023-07-16 18:15:31,671 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066. 2023-07-16 18:15:31,671 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539. 2023-07-16 18:15:31,672 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 2089fc7d9545c3ff9650ab50dcd21066, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-16 18:15:31,672 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b5f286b1157ed9594a74cb054eb66539, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-16 18:15:31,672 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b5f286b1157ed9594a74cb054eb66539 2023-07-16 18:15:31,672 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 2089fc7d9545c3ff9650ab50dcd21066 2023-07-16 18:15:31,672 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:31,672 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:31,672 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b5f286b1157ed9594a74cb054eb66539 2023-07-16 18:15:31,672 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(7897): checking classloading for b5f286b1157ed9594a74cb054eb66539 2023-07-16 18:15:31,672 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 2089fc7d9545c3ff9650ab50dcd21066 2023-07-16 18:15:31,672 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 2089fc7d9545c3ff9650ab50dcd21066 2023-07-16 18:15:31,674 INFO [StoreOpener-b5f286b1157ed9594a74cb054eb66539-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b5f286b1157ed9594a74cb054eb66539 2023-07-16 18:15:31,680 INFO [StoreOpener-2089fc7d9545c3ff9650ab50dcd21066-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 2089fc7d9545c3ff9650ab50dcd21066 2023-07-16 18:15:31,681 DEBUG [StoreOpener-b5f286b1157ed9594a74cb054eb66539-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/b5f286b1157ed9594a74cb054eb66539/f 2023-07-16 18:15:31,681 DEBUG [StoreOpener-b5f286b1157ed9594a74cb054eb66539-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/b5f286b1157ed9594a74cb054eb66539/f 2023-07-16 18:15:31,681 INFO [StoreOpener-b5f286b1157ed9594a74cb054eb66539-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b5f286b1157ed9594a74cb054eb66539 columnFamilyName f 2023-07-16 18:15:31,681 DEBUG [StoreOpener-2089fc7d9545c3ff9650ab50dcd21066-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/2089fc7d9545c3ff9650ab50dcd21066/f 2023-07-16 18:15:31,681 DEBUG [StoreOpener-2089fc7d9545c3ff9650ab50dcd21066-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/2089fc7d9545c3ff9650ab50dcd21066/f 2023-07-16 18:15:31,682 INFO [StoreOpener-2089fc7d9545c3ff9650ab50dcd21066-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: 
max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 2089fc7d9545c3ff9650ab50dcd21066 columnFamilyName f 2023-07-16 18:15:31,682 INFO [StoreOpener-b5f286b1157ed9594a74cb054eb66539-1] regionserver.HStore(310): Store=b5f286b1157ed9594a74cb054eb66539/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:31,683 INFO [StoreOpener-2089fc7d9545c3ff9650ab50dcd21066-1] regionserver.HStore(310): Store=2089fc7d9545c3ff9650ab50dcd21066/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:31,683 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/b5f286b1157ed9594a74cb054eb66539 2023-07-16 18:15:31,684 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/2089fc7d9545c3ff9650ab50dcd21066 2023-07-16 18:15:31,685 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/2089fc7d9545c3ff9650ab50dcd21066 2023-07-16 18:15:31,685 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/b5f286b1157ed9594a74cb054eb66539 2023-07-16 18:15:31,689 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 2089fc7d9545c3ff9650ab50dcd21066 2023-07-16 18:15:31,690 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b5f286b1157ed9594a74cb054eb66539 2023-07-16 18:15:31,691 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 2089fc7d9545c3ff9650ab50dcd21066; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10908328160, jitterRate=0.015917226672172546}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:31,691 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 2089fc7d9545c3ff9650ab50dcd21066: 2023-07-16 18:15:31,691 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b5f286b1157ed9594a74cb054eb66539; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9683353600, jitterRate=-0.09816741943359375}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:31,691 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(965): Region open journal for b5f286b1157ed9594a74cb054eb66539: 2023-07-16 18:15:31,692 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066., pid=39, masterSystemTime=1689531331663 2023-07-16 18:15:31,692 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539., pid=43, masterSystemTime=1689531331664 2023-07-16 18:15:31,694 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066. 2023-07-16 18:15:31,694 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066. 2023-07-16 18:15:31,695 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=30 updating hbase:meta row=2089fc7d9545c3ff9650ab50dcd21066, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:31,696 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539. 2023-07-16 18:15:31,696 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531331695"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531331695"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531331695"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531331695"}]},"ts":"1689531331695"} 2023-07-16 18:15:31,696 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539. 2023-07-16 18:15:31,696 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72. 
2023-07-16 18:15:31,696 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5ae9f31ecda082e767a2a62ef1f27f72, NAME => 'Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-16 18:15:31,696 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=31 updating hbase:meta row=b5f286b1157ed9594a74cb054eb66539, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:31,697 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531331696"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531331696"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531331696"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531331696"}]},"ts":"1689531331696"} 2023-07-16 18:15:31,697 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 5ae9f31ecda082e767a2a62ef1f27f72 2023-07-16 18:15:31,697 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:31,697 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5ae9f31ecda082e767a2a62ef1f27f72 2023-07-16 18:15:31,697 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5ae9f31ecda082e767a2a62ef1f27f72 2023-07-16 18:15:31,699 INFO [StoreOpener-5ae9f31ecda082e767a2a62ef1f27f72-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 5ae9f31ecda082e767a2a62ef1f27f72 2023-07-16 18:15:31,703 DEBUG [StoreOpener-5ae9f31ecda082e767a2a62ef1f27f72-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/5ae9f31ecda082e767a2a62ef1f27f72/f 2023-07-16 18:15:31,703 DEBUG [StoreOpener-5ae9f31ecda082e767a2a62ef1f27f72-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/5ae9f31ecda082e767a2a62ef1f27f72/f 2023-07-16 18:15:31,704 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=39, resume processing ppid=30 2023-07-16 18:15:31,704 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=39, ppid=30, state=SUCCESS; OpenRegionProcedure 2089fc7d9545c3ff9650ab50dcd21066, server=jenkins-hbase4.apache.org,33809,1689531323219 in 188 msec 2023-07-16 18:15:31,704 INFO [StoreOpener-5ae9f31ecda082e767a2a62ef1f27f72-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, 
maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5ae9f31ecda082e767a2a62ef1f27f72 columnFamilyName f 2023-07-16 18:15:31,705 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=43, resume processing ppid=31 2023-07-16 18:15:31,705 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=43, ppid=31, state=SUCCESS; OpenRegionProcedure b5f286b1157ed9594a74cb054eb66539, server=jenkins-hbase4.apache.org,41927,1689531323590 in 181 msec 2023-07-16 18:15:31,705 INFO [StoreOpener-5ae9f31ecda082e767a2a62ef1f27f72-1] regionserver.HStore(310): Store=5ae9f31ecda082e767a2a62ef1f27f72/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:31,706 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=30, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2089fc7d9545c3ff9650ab50dcd21066, REOPEN/MOVE in 565 msec 2023-07-16 18:15:31,707 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/5ae9f31ecda082e767a2a62ef1f27f72 2023-07-16 18:15:31,707 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=31, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b5f286b1157ed9594a74cb054eb66539, REOPEN/MOVE in 563 msec 2023-07-16 18:15:31,709 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/5ae9f31ecda082e767a2a62ef1f27f72 2023-07-16 18:15:31,713 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5ae9f31ecda082e767a2a62ef1f27f72 2023-07-16 18:15:31,714 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5ae9f31ecda082e767a2a62ef1f27f72; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10240543200, jitterRate=-0.04627509415149689}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:31,714 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5ae9f31ecda082e767a2a62ef1f27f72: 2023-07-16 18:15:31,715 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72., pid=41, masterSystemTime=1689531331664 2023-07-16 18:15:31,717 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72. 
2023-07-16 18:15:31,717 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72. 2023-07-16 18:15:31,717 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b. 2023-07-16 18:15:31,717 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 3816510628b0129bfdf2e5c6472c4c7b, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-16 18:15:31,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 3816510628b0129bfdf2e5c6472c4c7b 2023-07-16 18:15:31,718 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=29 updating hbase:meta row=5ae9f31ecda082e767a2a62ef1f27f72, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:31,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:31,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 3816510628b0129bfdf2e5c6472c4c7b 2023-07-16 18:15:31,718 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 3816510628b0129bfdf2e5c6472c4c7b 2023-07-16 18:15:31,718 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531331718"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531331718"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531331718"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531331718"}]},"ts":"1689531331718"} 2023-07-16 18:15:31,719 INFO [StoreOpener-3816510628b0129bfdf2e5c6472c4c7b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 3816510628b0129bfdf2e5c6472c4c7b 2023-07-16 18:15:31,721 DEBUG [StoreOpener-3816510628b0129bfdf2e5c6472c4c7b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/3816510628b0129bfdf2e5c6472c4c7b/f 2023-07-16 18:15:31,721 DEBUG [StoreOpener-3816510628b0129bfdf2e5c6472c4c7b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/3816510628b0129bfdf2e5c6472c4c7b/f 2023-07-16 18:15:31,721 INFO [StoreOpener-3816510628b0129bfdf2e5c6472c4c7b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); 
files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 3816510628b0129bfdf2e5c6472c4c7b columnFamilyName f 2023-07-16 18:15:31,722 INFO [StoreOpener-3816510628b0129bfdf2e5c6472c4c7b-1] regionserver.HStore(310): Store=3816510628b0129bfdf2e5c6472c4c7b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:31,722 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=41, resume processing ppid=29 2023-07-16 18:15:31,723 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=41, ppid=29, state=SUCCESS; OpenRegionProcedure 5ae9f31ecda082e767a2a62ef1f27f72, server=jenkins-hbase4.apache.org,41927,1689531323590 in 207 msec 2023-07-16 18:15:31,723 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/3816510628b0129bfdf2e5c6472c4c7b 2023-07-16 18:15:31,725 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=29, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5ae9f31ecda082e767a2a62ef1f27f72, REOPEN/MOVE in 588 msec 2023-07-16 18:15:31,725 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/3816510628b0129bfdf2e5c6472c4c7b 2023-07-16 18:15:31,729 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 3816510628b0129bfdf2e5c6472c4c7b 2023-07-16 18:15:31,730 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 3816510628b0129bfdf2e5c6472c4c7b; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10857061440, jitterRate=0.011142641305923462}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:31,730 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 3816510628b0129bfdf2e5c6472c4c7b: 2023-07-16 18:15:31,731 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b., pid=42, masterSystemTime=1689531331664 2023-07-16 18:15:31,732 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b. 2023-07-16 18:15:31,733 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b. 
2023-07-16 18:15:31,733 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde. 2023-07-16 18:15:31,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 63cbed7d5a556a760487d1ada473fcde, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-16 18:15:31,733 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=34 updating hbase:meta row=3816510628b0129bfdf2e5c6472c4c7b, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:31,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 63cbed7d5a556a760487d1ada473fcde 2023-07-16 18:15:31,733 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531331733"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531331733"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531331733"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531331733"}]},"ts":"1689531331733"} 2023-07-16 18:15:31,733 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:31,734 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 63cbed7d5a556a760487d1ada473fcde 2023-07-16 18:15:31,734 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 63cbed7d5a556a760487d1ada473fcde 2023-07-16 18:15:31,735 INFO [StoreOpener-63cbed7d5a556a760487d1ada473fcde-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 63cbed7d5a556a760487d1ada473fcde 2023-07-16 18:15:31,737 DEBUG [StoreOpener-63cbed7d5a556a760487d1ada473fcde-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/63cbed7d5a556a760487d1ada473fcde/f 2023-07-16 18:15:31,737 DEBUG [StoreOpener-63cbed7d5a556a760487d1ada473fcde-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/63cbed7d5a556a760487d1ada473fcde/f 2023-07-16 18:15:31,737 INFO [StoreOpener-63cbed7d5a556a760487d1ada473fcde-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 
0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 63cbed7d5a556a760487d1ada473fcde columnFamilyName f 2023-07-16 18:15:31,738 INFO [StoreOpener-63cbed7d5a556a760487d1ada473fcde-1] regionserver.HStore(310): Store=63cbed7d5a556a760487d1ada473fcde/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:31,739 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=42, resume processing ppid=34 2023-07-16 18:15:31,739 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=42, ppid=34, state=SUCCESS; OpenRegionProcedure 3816510628b0129bfdf2e5c6472c4c7b, server=jenkins-hbase4.apache.org,41927,1689531323590 in 219 msec 2023-07-16 18:15:31,739 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/63cbed7d5a556a760487d1ada473fcde 2023-07-16 18:15:31,741 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/63cbed7d5a556a760487d1ada473fcde 2023-07-16 18:15:31,745 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=34, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3816510628b0129bfdf2e5c6472c4c7b, REOPEN/MOVE in 591 msec 2023-07-16 18:15:31,746 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 63cbed7d5a556a760487d1ada473fcde 2023-07-16 18:15:31,747 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 63cbed7d5a556a760487d1ada473fcde; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11239150080, jitterRate=0.04672741889953613}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:31,747 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 63cbed7d5a556a760487d1ada473fcde: 2023-07-16 18:15:31,748 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde., pid=40, masterSystemTime=1689531331664 2023-07-16 18:15:31,750 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde. 2023-07-16 18:15:31,751 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde. 
2023-07-16 18:15:31,751 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=32 updating hbase:meta row=63cbed7d5a556a760487d1ada473fcde, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:31,751 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531331751"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531331751"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531331751"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531331751"}]},"ts":"1689531331751"} 2023-07-16 18:15:31,757 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=40, resume processing ppid=32 2023-07-16 18:15:31,757 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=40, ppid=32, state=SUCCESS; OpenRegionProcedure 63cbed7d5a556a760487d1ada473fcde, server=jenkins-hbase4.apache.org,41927,1689531323590 in 243 msec 2023-07-16 18:15:31,759 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=32, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63cbed7d5a556a760487d1ada473fcde, REOPEN/MOVE in 612 msec 2023-07-16 18:15:32,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure.ProcedureSyncWait(216): waitFor pid=29 2023-07-16 18:15:32,151 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testTableMoveTruncateAndDrop] moved to target group Group_testTableMoveTruncateAndDrop_1745006964. 
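[editor's note] The entries above record the server side of a single RSGroupAdminService.MoveTables request: each region of Group_testTableMoveTruncateAndDrop goes through a REOPEN/MOVE TransitRegionStateProcedure (a CloseRegionProcedure on its old server followed by an OpenRegionProcedure on a server of the target group), and the RPC handler waits on the parent procedures (ProcedureSyncWait waitFor pid=29) before RSGroupAdminServer reports all regions moved. Purely as a hedged illustration, the Java sketch below shows the kind of client call that would drive this; it assumes the branch-2.4 hbase-rsgroup client class org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient, reuses the table and group names from the log, and is not the test's actual code.

import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class MoveTableToGroupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // RSGroupAdminClient talks to the RSGroupAdminEndpoint coprocessor on the master.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      TableName table = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
      String targetGroup = "Group_testTableMoveTruncateAndDrop_1745006964"; // group name taken from the log
      // Moving the table schedules one REOPEN/MOVE procedure per region; the call
      // returns once the regions have been reassigned onto servers of the target group.
      rsGroupAdmin.moveTables(Collections.singleton(table), targetGroup);
      RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
      System.out.println("table now in group: " + info.getName());
    }
  }
}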
2023-07-16 18:15:32,151 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:32,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:32,156 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:32,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testTableMoveTruncateAndDrop 2023-07-16 18:15:32,160 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 18:15:32,161 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:32,168 INFO [Listener at localhost/38073] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-16 18:15:32,173 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-16 18:15:32,182 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=44, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 18:15:32,187 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531332187"}]},"ts":"1689531332187"} 2023-07-16 18:15:32,189 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-16 18:15:32,189 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-16 18:15:32,191 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-16 18:15:32,196 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=45, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5ae9f31ecda082e767a2a62ef1f27f72, UNASSIGN}, {pid=46, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2089fc7d9545c3ff9650ab50dcd21066, UNASSIGN}, {pid=47, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b5f286b1157ed9594a74cb054eb66539, UNASSIGN}, {pid=48, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63cbed7d5a556a760487d1ada473fcde, UNASSIGN}, {pid=49, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure 
table=Group_testTableMoveTruncateAndDrop, region=3816510628b0129bfdf2e5c6472c4c7b, UNASSIGN}] 2023-07-16 18:15:32,199 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=49, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3816510628b0129bfdf2e5c6472c4c7b, UNASSIGN 2023-07-16 18:15:32,200 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=47, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b5f286b1157ed9594a74cb054eb66539, UNASSIGN 2023-07-16 18:15:32,200 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=48, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63cbed7d5a556a760487d1ada473fcde, UNASSIGN 2023-07-16 18:15:32,200 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=46, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2089fc7d9545c3ff9650ab50dcd21066, UNASSIGN 2023-07-16 18:15:32,200 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=45, ppid=44, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5ae9f31ecda082e767a2a62ef1f27f72, UNASSIGN 2023-07-16 18:15:32,201 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=3816510628b0129bfdf2e5c6472c4c7b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:32,201 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=63cbed7d5a556a760487d1ada473fcde, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:32,201 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=b5f286b1157ed9594a74cb054eb66539, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:32,201 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531332201"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531332201"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531332201"}]},"ts":"1689531332201"} 2023-07-16 18:15:32,201 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531332201"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531332201"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531332201"}]},"ts":"1689531332201"} 2023-07-16 18:15:32,201 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531332201"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531332201"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531332201"}]},"ts":"1689531332201"} 2023-07-16 
18:15:32,202 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=2089fc7d9545c3ff9650ab50dcd21066, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:32,202 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=5ae9f31ecda082e767a2a62ef1f27f72, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:32,202 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531332202"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531332202"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531332202"}]},"ts":"1689531332202"} 2023-07-16 18:15:32,202 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531332202"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531332202"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531332202"}]},"ts":"1689531332202"} 2023-07-16 18:15:32,207 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=50, ppid=48, state=RUNNABLE; CloseRegionProcedure 63cbed7d5a556a760487d1ada473fcde, server=jenkins-hbase4.apache.org,41927,1689531323590}] 2023-07-16 18:15:32,208 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=51, ppid=47, state=RUNNABLE; CloseRegionProcedure b5f286b1157ed9594a74cb054eb66539, server=jenkins-hbase4.apache.org,41927,1689531323590}] 2023-07-16 18:15:32,210 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=52, ppid=49, state=RUNNABLE; CloseRegionProcedure 3816510628b0129bfdf2e5c6472c4c7b, server=jenkins-hbase4.apache.org,41927,1689531323590}] 2023-07-16 18:15:32,212 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=53, ppid=46, state=RUNNABLE; CloseRegionProcedure 2089fc7d9545c3ff9650ab50dcd21066, server=jenkins-hbase4.apache.org,33809,1689531323219}] 2023-07-16 18:15:32,213 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=54, ppid=45, state=RUNNABLE; CloseRegionProcedure 5ae9f31ecda082e767a2a62ef1f27f72, server=jenkins-hbase4.apache.org,41927,1689531323590}] 2023-07-16 18:15:32,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-16 18:15:32,363 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5ae9f31ecda082e767a2a62ef1f27f72 2023-07-16 18:15:32,364 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5ae9f31ecda082e767a2a62ef1f27f72, disabling compactions & flushes 2023-07-16 18:15:32,364 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72. 2023-07-16 18:15:32,364 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72. 
2023-07-16 18:15:32,364 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72. after waiting 0 ms 2023-07-16 18:15:32,364 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72. 2023-07-16 18:15:32,367 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 2089fc7d9545c3ff9650ab50dcd21066 2023-07-16 18:15:32,368 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 2089fc7d9545c3ff9650ab50dcd21066, disabling compactions & flushes 2023-07-16 18:15:32,368 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066. 2023-07-16 18:15:32,368 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066. 2023-07-16 18:15:32,368 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066. after waiting 0 ms 2023-07-16 18:15:32,368 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066. 2023-07-16 18:15:32,372 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/5ae9f31ecda082e767a2a62ef1f27f72/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 18:15:32,373 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72. 2023-07-16 18:15:32,373 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5ae9f31ecda082e767a2a62ef1f27f72: 2023-07-16 18:15:32,375 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/2089fc7d9545c3ff9650ab50dcd21066/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 18:15:32,375 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5ae9f31ecda082e767a2a62ef1f27f72 2023-07-16 18:15:32,375 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b5f286b1157ed9594a74cb054eb66539 2023-07-16 18:15:32,376 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b5f286b1157ed9594a74cb054eb66539, disabling compactions & flushes 2023-07-16 18:15:32,376 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539. 
2023-07-16 18:15:32,376 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539. 2023-07-16 18:15:32,376 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539. after waiting 0 ms 2023-07-16 18:15:32,376 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539. 2023-07-16 18:15:32,376 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066. 2023-07-16 18:15:32,376 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=45 updating hbase:meta row=5ae9f31ecda082e767a2a62ef1f27f72, regionState=CLOSED 2023-07-16 18:15:32,376 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 2089fc7d9545c3ff9650ab50dcd21066: 2023-07-16 18:15:32,376 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531332376"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531332376"}]},"ts":"1689531332376"} 2023-07-16 18:15:32,379 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 2089fc7d9545c3ff9650ab50dcd21066 2023-07-16 18:15:32,379 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=46 updating hbase:meta row=2089fc7d9545c3ff9650ab50dcd21066, regionState=CLOSED 2023-07-16 18:15:32,380 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531332379"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531332379"}]},"ts":"1689531332379"} 2023-07-16 18:15:32,381 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/b5f286b1157ed9594a74cb054eb66539/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 18:15:32,382 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539. 
2023-07-16 18:15:32,382 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b5f286b1157ed9594a74cb054eb66539: 2023-07-16 18:15:32,383 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=54, resume processing ppid=45 2023-07-16 18:15:32,383 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=54, ppid=45, state=SUCCESS; CloseRegionProcedure 5ae9f31ecda082e767a2a62ef1f27f72, server=jenkins-hbase4.apache.org,41927,1689531323590 in 166 msec 2023-07-16 18:15:32,384 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b5f286b1157ed9594a74cb054eb66539 2023-07-16 18:15:32,384 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 63cbed7d5a556a760487d1ada473fcde 2023-07-16 18:15:32,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 63cbed7d5a556a760487d1ada473fcde, disabling compactions & flushes 2023-07-16 18:15:32,385 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde. 2023-07-16 18:15:32,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde. 2023-07-16 18:15:32,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde. after waiting 0 ms 2023-07-16 18:15:32,385 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde. 
2023-07-16 18:15:32,385 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=47 updating hbase:meta row=b5f286b1157ed9594a74cb054eb66539, regionState=CLOSED 2023-07-16 18:15:32,385 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531332385"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531332385"}]},"ts":"1689531332385"} 2023-07-16 18:15:32,393 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=53, resume processing ppid=46 2023-07-16 18:15:32,393 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=53, ppid=46, state=SUCCESS; CloseRegionProcedure 2089fc7d9545c3ff9650ab50dcd21066, server=jenkins-hbase4.apache.org,33809,1689531323219 in 169 msec 2023-07-16 18:15:32,394 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=45, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=5ae9f31ecda082e767a2a62ef1f27f72, UNASSIGN in 190 msec 2023-07-16 18:15:32,395 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=46, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=2089fc7d9545c3ff9650ab50dcd21066, UNASSIGN in 200 msec 2023-07-16 18:15:32,396 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=51, resume processing ppid=47 2023-07-16 18:15:32,396 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=51, ppid=47, state=SUCCESS; CloseRegionProcedure b5f286b1157ed9594a74cb054eb66539, server=jenkins-hbase4.apache.org,41927,1689531323590 in 179 msec 2023-07-16 18:15:32,398 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/63cbed7d5a556a760487d1ada473fcde/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 18:15:32,398 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=47, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b5f286b1157ed9594a74cb054eb66539, UNASSIGN in 203 msec 2023-07-16 18:15:32,398 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde. 2023-07-16 18:15:32,399 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 63cbed7d5a556a760487d1ada473fcde: 2023-07-16 18:15:32,400 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 63cbed7d5a556a760487d1ada473fcde 2023-07-16 18:15:32,400 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 3816510628b0129bfdf2e5c6472c4c7b 2023-07-16 18:15:32,402 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 3816510628b0129bfdf2e5c6472c4c7b, disabling compactions & flushes 2023-07-16 18:15:32,402 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b. 
2023-07-16 18:15:32,402 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b. 2023-07-16 18:15:32,402 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=48 updating hbase:meta row=63cbed7d5a556a760487d1ada473fcde, regionState=CLOSED 2023-07-16 18:15:32,402 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b. after waiting 0 ms 2023-07-16 18:15:32,402 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b. 2023-07-16 18:15:32,402 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531332402"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531332402"}]},"ts":"1689531332402"} 2023-07-16 18:15:32,406 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=50, resume processing ppid=48 2023-07-16 18:15:32,406 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=50, ppid=48, state=SUCCESS; CloseRegionProcedure 63cbed7d5a556a760487d1ada473fcde, server=jenkins-hbase4.apache.org,41927,1689531323590 in 197 msec 2023-07-16 18:15:32,407 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/3816510628b0129bfdf2e5c6472c4c7b/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 18:15:32,408 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b. 
2023-07-16 18:15:32,408 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 3816510628b0129bfdf2e5c6472c4c7b: 2023-07-16 18:15:32,408 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=48, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=63cbed7d5a556a760487d1ada473fcde, UNASSIGN in 213 msec 2023-07-16 18:15:32,410 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 3816510628b0129bfdf2e5c6472c4c7b 2023-07-16 18:15:32,410 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=49 updating hbase:meta row=3816510628b0129bfdf2e5c6472c4c7b, regionState=CLOSED 2023-07-16 18:15:32,411 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531332410"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531332410"}]},"ts":"1689531332410"} 2023-07-16 18:15:32,415 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=52, resume processing ppid=49 2023-07-16 18:15:32,415 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=52, ppid=49, state=SUCCESS; CloseRegionProcedure 3816510628b0129bfdf2e5c6472c4c7b, server=jenkins-hbase4.apache.org,41927,1689531323590 in 203 msec 2023-07-16 18:15:32,420 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=49, resume processing ppid=44 2023-07-16 18:15:32,420 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=49, ppid=44, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=3816510628b0129bfdf2e5c6472c4c7b, UNASSIGN in 222 msec 2023-07-16 18:15:32,421 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531332421"}]},"ts":"1689531332421"} 2023-07-16 18:15:32,423 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-16 18:15:32,426 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-16 18:15:32,430 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=44, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 253 msec 2023-07-16 18:15:32,493 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=44 2023-07-16 18:15:32,493 INFO [Listener at localhost/38073] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 44 completed 2023-07-16 18:15:32,494 INFO [Listener at localhost/38073] client.HBaseAdmin$13(770): Started truncating Group_testTableMoveTruncateAndDrop 2023-07-16 18:15:32,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.HMaster$6(2260): Client=jenkins//172.31.14.131 truncate Group_testTableMoveTruncateAndDrop 2023-07-16 18:15:32,507 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=55, state=RUNNABLE:TRUNCATE_TABLE_PRE_OPERATION; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop 
preserveSplits=true) 2023-07-16 18:15:32,510 DEBUG [PEWorker-5] procedure.TruncateTableProcedure(87): waiting for 'Group_testTableMoveTruncateAndDrop' regions in transition 2023-07-16 18:15:32,512 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-16 18:15:32,524 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2089fc7d9545c3ff9650ab50dcd21066 2023-07-16 18:15:32,524 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3816510628b0129bfdf2e5c6472c4c7b 2023-07-16 18:15:32,524 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/63cbed7d5a556a760487d1ada473fcde 2023-07-16 18:15:32,524 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5ae9f31ecda082e767a2a62ef1f27f72 2023-07-16 18:15:32,524 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b5f286b1157ed9594a74cb054eb66539 2023-07-16 18:15:32,529 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b5f286b1157ed9594a74cb054eb66539/f, FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b5f286b1157ed9594a74cb054eb66539/recovered.edits] 2023-07-16 18:15:32,529 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/63cbed7d5a556a760487d1ada473fcde/f, FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/63cbed7d5a556a760487d1ada473fcde/recovered.edits] 2023-07-16 18:15:32,529 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2089fc7d9545c3ff9650ab50dcd21066/f, FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2089fc7d9545c3ff9650ab50dcd21066/recovered.edits] 2023-07-16 18:15:32,529 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3816510628b0129bfdf2e5c6472c4c7b/f, FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3816510628b0129bfdf2e5c6472c4c7b/recovered.edits] 2023-07-16 18:15:32,533 DEBUG [HFileArchiver-8] 
backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5ae9f31ecda082e767a2a62ef1f27f72/f, FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5ae9f31ecda082e767a2a62ef1f27f72/recovered.edits] 2023-07-16 18:15:32,545 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3816510628b0129bfdf2e5c6472c4c7b/recovered.edits/7.seqid to hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/archive/data/default/Group_testTableMoveTruncateAndDrop/3816510628b0129bfdf2e5c6472c4c7b/recovered.edits/7.seqid 2023-07-16 18:15:32,548 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/3816510628b0129bfdf2e5c6472c4c7b 2023-07-16 18:15:32,548 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b5f286b1157ed9594a74cb054eb66539/recovered.edits/7.seqid to hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/archive/data/default/Group_testTableMoveTruncateAndDrop/b5f286b1157ed9594a74cb054eb66539/recovered.edits/7.seqid 2023-07-16 18:15:32,549 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b5f286b1157ed9594a74cb054eb66539 2023-07-16 18:15:32,552 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/63cbed7d5a556a760487d1ada473fcde/recovered.edits/7.seqid to hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/archive/data/default/Group_testTableMoveTruncateAndDrop/63cbed7d5a556a760487d1ada473fcde/recovered.edits/7.seqid 2023-07-16 18:15:32,552 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2089fc7d9545c3ff9650ab50dcd21066/recovered.edits/7.seqid to hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/archive/data/default/Group_testTableMoveTruncateAndDrop/2089fc7d9545c3ff9650ab50dcd21066/recovered.edits/7.seqid 2023-07-16 18:15:32,554 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/63cbed7d5a556a760487d1ada473fcde 2023-07-16 18:15:32,554 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/2089fc7d9545c3ff9650ab50dcd21066 2023-07-16 18:15:32,555 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5ae9f31ecda082e767a2a62ef1f27f72/recovered.edits/7.seqid to hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/archive/data/default/Group_testTableMoveTruncateAndDrop/5ae9f31ecda082e767a2a62ef1f27f72/recovered.edits/7.seqid 2023-07-16 18:15:32,556 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/5ae9f31ecda082e767a2a62ef1f27f72 2023-07-16 18:15:32,556 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-16 18:15:32,587 WARN [PEWorker-5] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-16 18:15:32,599 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-16 18:15:32,599 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 2023-07-16 18:15:32,600 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689531332600"}]},"ts":"9223372036854775807"} 2023-07-16 18:15:32,600 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689531332600"}]},"ts":"9223372036854775807"} 2023-07-16 18:15:32,600 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689531332600"}]},"ts":"9223372036854775807"} 2023-07-16 18:15:32,600 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689531332600"}]},"ts":"9223372036854775807"} 2023-07-16 18:15:32,600 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689531332600"}]},"ts":"9223372036854775807"} 2023-07-16 18:15:32,606 INFO [PEWorker-5] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-16 18:15:32,606 DEBUG [PEWorker-5] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 5ae9f31ecda082e767a2a62ef1f27f72, NAME => 'Group_testTableMoveTruncateAndDrop,,1689531329810.5ae9f31ecda082e767a2a62ef1f27f72.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 2089fc7d9545c3ff9650ab50dcd21066, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689531329810.2089fc7d9545c3ff9650ab50dcd21066.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => b5f286b1157ed9594a74cb054eb66539, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531329810.b5f286b1157ed9594a74cb054eb66539.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 
'r\x1C\xC7r\x1B'}, {ENCODED => 63cbed7d5a556a760487d1ada473fcde, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531329810.63cbed7d5a556a760487d1ada473fcde.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 3816510628b0129bfdf2e5c6472c4c7b, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689531329810.3816510628b0129bfdf2e5c6472c4c7b.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-16 18:15:32,606 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 2023-07-16 18:15:32,606 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689531332606"}]},"ts":"9223372036854775807"} 2023-07-16 18:15:32,609 INFO [PEWorker-5] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-16 18:15:32,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-16 18:15:32,621 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/50422474b5c8d14c069cdc6e45e54638 2023-07-16 18:15:32,621 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9e47427e6e7d83484bef64dc1523eabf 2023-07-16 18:15:32,621 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7dd0f632e258203e2698c3fa6927d09 2023-07-16 18:15:32,621 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4301d965c4312d19ab4e01adf2acb533 2023-07-16 18:15:32,621 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/35814aedff3426711613031993a15b0e 2023-07-16 18:15:32,623 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9e47427e6e7d83484bef64dc1523eabf empty. 2023-07-16 18:15:32,623 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/35814aedff3426711613031993a15b0e empty. 2023-07-16 18:15:32,624 DEBUG [HFileArchiver-5] backup.HFileArchiver(153): Directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4301d965c4312d19ab4e01adf2acb533 empty. 2023-07-16 18:15:32,624 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7dd0f632e258203e2698c3fa6927d09 empty. 
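For reference, the DISABLE (procId 44) and truncate (pid=55, preserveSplits=true) operations recorded in the entries above correspond to the standard HBase client Admin calls. A minimal sketch, assuming a reachable cluster configuration on the classpath and the same table name; the class name and connection setup are illustrative and not taken from this log:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    // Illustrative driver: disable a table, then truncate it while keeping its split keys.
    public class TruncatePreservingSplits {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();  // reads hbase-site.xml (ZK quorum etc.) from the classpath
        TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          if (admin.isTableEnabled(tn)) {
            admin.disableTable(tn);      // drives a DisableTableProcedure; regions are unassigned and closed
          }
          admin.truncateTable(tn, true); // preserveSplits=true: regions are re-created with the original split keys
        }
      }
    }

Admin.truncateTable requires the table to be disabled first, which is why a DISABLE is issued before the truncate; with preserveSplits=true the TruncateTableProcedure re-creates the regions on the original boundaries, as the subsequent entries show.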
2023-07-16 18:15:32,624 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/50422474b5c8d14c069cdc6e45e54638 empty. 2023-07-16 18:15:32,624 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4301d965c4312d19ab4e01adf2acb533 2023-07-16 18:15:32,624 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/35814aedff3426711613031993a15b0e 2023-07-16 18:15:32,625 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9e47427e6e7d83484bef64dc1523eabf 2023-07-16 18:15:32,625 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/50422474b5c8d14c069cdc6e45e54638 2023-07-16 18:15:32,625 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7dd0f632e258203e2698c3fa6927d09 2023-07-16 18:15:32,625 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-16 18:15:32,663 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/.tabledesc/.tableinfo.0000000001 2023-07-16 18:15:32,666 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 4301d965c4312d19ab4e01adf2acb533, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689531332558.4301d965c4312d19ab4e01adf2acb533.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp 2023-07-16 18:15:32,666 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(7675): creating {ENCODED => 50422474b5c8d14c069cdc6e45e54638, NAME => 'Group_testTableMoveTruncateAndDrop,,1689531332558.50422474b5c8d14c069cdc6e45e54638.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, 
regionDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp 2023-07-16 18:15:32,666 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => b7dd0f632e258203e2698c3fa6927d09, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531332558.b7dd0f632e258203e2698c3fa6927d09.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp 2023-07-16 18:15:32,729 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531332558.b7dd0f632e258203e2698c3fa6927d09.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:32,729 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689531332558.50422474b5c8d14c069cdc6e45e54638.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:32,729 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689531332558.4301d965c4312d19ab4e01adf2acb533.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:32,729 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing b7dd0f632e258203e2698c3fa6927d09, disabling compactions & flushes 2023-07-16 18:15:32,729 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 4301d965c4312d19ab4e01adf2acb533, disabling compactions & flushes 2023-07-16 18:15:32,729 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531332558.b7dd0f632e258203e2698c3fa6927d09. 2023-07-16 18:15:32,729 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1604): Closing 50422474b5c8d14c069cdc6e45e54638, disabling compactions & flushes 2023-07-16 18:15:32,729 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531332558.b7dd0f632e258203e2698c3fa6927d09. 2023-07-16 18:15:32,729 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689531332558.4301d965c4312d19ab4e01adf2acb533. 2023-07-16 18:15:32,730 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689531332558.4301d965c4312d19ab4e01adf2acb533. 
2023-07-16 18:15:32,729 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531332558.b7dd0f632e258203e2698c3fa6927d09. after waiting 0 ms 2023-07-16 18:15:32,730 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689531332558.4301d965c4312d19ab4e01adf2acb533. after waiting 0 ms 2023-07-16 18:15:32,729 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689531332558.50422474b5c8d14c069cdc6e45e54638. 2023-07-16 18:15:32,730 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689531332558.4301d965c4312d19ab4e01adf2acb533. 2023-07-16 18:15:32,730 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531332558.b7dd0f632e258203e2698c3fa6927d09. 2023-07-16 18:15:32,730 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689531332558.4301d965c4312d19ab4e01adf2acb533. 2023-07-16 18:15:32,730 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531332558.b7dd0f632e258203e2698c3fa6927d09. 2023-07-16 18:15:32,730 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689531332558.50422474b5c8d14c069cdc6e45e54638. 2023-07-16 18:15:32,730 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for b7dd0f632e258203e2698c3fa6927d09: 2023-07-16 18:15:32,730 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689531332558.50422474b5c8d14c069cdc6e45e54638. after waiting 0 ms 2023-07-16 18:15:32,730 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 4301d965c4312d19ab4e01adf2acb533: 2023-07-16 18:15:32,730 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689531332558.50422474b5c8d14c069cdc6e45e54638. 2023-07-16 18:15:32,730 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689531332558.50422474b5c8d14c069cdc6e45e54638. 
2023-07-16 18:15:32,730 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-0] regionserver.HRegion(1558): Region close journal for 50422474b5c8d14c069cdc6e45e54638: 2023-07-16 18:15:32,730 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(7675): creating {ENCODED => 35814aedff3426711613031993a15b0e, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531332558.35814aedff3426711613031993a15b0e.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp 2023-07-16 18:15:32,731 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(7675): creating {ENCODED => 9e47427e6e7d83484bef64dc1523eabf, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689531332558.9e47427e6e7d83484bef64dc1523eabf.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testTableMoveTruncateAndDrop', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp 2023-07-16 18:15:32,763 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531332558.35814aedff3426711613031993a15b0e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:32,763 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1604): Closing 35814aedff3426711613031993a15b0e, disabling compactions & flushes 2023-07-16 18:15:32,763 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531332558.35814aedff3426711613031993a15b0e. 2023-07-16 18:15:32,763 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531332558.35814aedff3426711613031993a15b0e. 2023-07-16 18:15:32,764 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531332558.35814aedff3426711613031993a15b0e. after waiting 0 ms 2023-07-16 18:15:32,764 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531332558.35814aedff3426711613031993a15b0e. 2023-07-16 18:15:32,764 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531332558.35814aedff3426711613031993a15b0e. 
2023-07-16 18:15:32,764 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-2] regionserver.HRegion(1558): Region close journal for 35814aedff3426711613031993a15b0e: 2023-07-16 18:15:32,773 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689531332558.9e47427e6e7d83484bef64dc1523eabf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:32,773 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1604): Closing 9e47427e6e7d83484bef64dc1523eabf, disabling compactions & flushes 2023-07-16 18:15:32,773 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689531332558.9e47427e6e7d83484bef64dc1523eabf. 2023-07-16 18:15:32,773 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689531332558.9e47427e6e7d83484bef64dc1523eabf. 2023-07-16 18:15:32,773 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689531332558.9e47427e6e7d83484bef64dc1523eabf. after waiting 0 ms 2023-07-16 18:15:32,773 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689531332558.9e47427e6e7d83484bef64dc1523eabf. 2023-07-16 18:15:32,773 INFO [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689531332558.9e47427e6e7d83484bef64dc1523eabf. 
2023-07-16 18:15:32,773 DEBUG [RegionOpenAndInit-Group_testTableMoveTruncateAndDrop-pool-1] regionserver.HRegion(1558): Region close journal for 9e47427e6e7d83484bef64dc1523eabf: 2023-07-16 18:15:32,778 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689531332558.b7dd0f632e258203e2698c3fa6927d09.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531332777"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531332777"}]},"ts":"1689531332777"} 2023-07-16 18:15:32,778 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689531332558.4301d965c4312d19ab4e01adf2acb533.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531332777"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531332777"}]},"ts":"1689531332777"} 2023-07-16 18:15:32,778 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689531332558.50422474b5c8d14c069cdc6e45e54638.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531332777"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531332777"}]},"ts":"1689531332777"} 2023-07-16 18:15:32,778 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689531332558.35814aedff3426711613031993a15b0e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531332777"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531332777"}]},"ts":"1689531332777"} 2023-07-16 18:15:32,778 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689531332558.9e47427e6e7d83484bef64dc1523eabf.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531332777"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531332777"}]},"ts":"1689531332777"} 2023-07-16 18:15:32,781 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
2023-07-16 18:15:32,783 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531332782"}]},"ts":"1689531332782"} 2023-07-16 18:15:32,785 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLING in hbase:meta 2023-07-16 18:15:32,789 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 18:15:32,789 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 18:15:32,789 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 18:15:32,789 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 18:15:32,790 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=50422474b5c8d14c069cdc6e45e54638, ASSIGN}, {pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4301d965c4312d19ab4e01adf2acb533, ASSIGN}, {pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7dd0f632e258203e2698c3fa6927d09, ASSIGN}, {pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=35814aedff3426711613031993a15b0e, ASSIGN}, {pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9e47427e6e7d83484bef64dc1523eabf, ASSIGN}] 2023-07-16 18:15:32,792 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=50422474b5c8d14c069cdc6e45e54638, ASSIGN 2023-07-16 18:15:32,792 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4301d965c4312d19ab4e01adf2acb533, ASSIGN 2023-07-16 18:15:32,792 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9e47427e6e7d83484bef64dc1523eabf, ASSIGN 2023-07-16 18:15:32,792 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=35814aedff3426711613031993a15b0e, ASSIGN 2023-07-16 18:15:32,792 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7dd0f632e258203e2698c3fa6927d09, ASSIGN 2023-07-16 18:15:32,793 INFO [PEWorker-2] 
assignment.TransitRegionStateProcedure(193): Starting pid=56, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=50422474b5c8d14c069cdc6e45e54638, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41927,1689531323590; forceNewPlan=false, retain=false 2023-07-16 18:15:32,793 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=60, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9e47427e6e7d83484bef64dc1523eabf, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41927,1689531323590; forceNewPlan=false, retain=false 2023-07-16 18:15:32,793 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=57, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4301d965c4312d19ab4e01adf2acb533, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33809,1689531323219; forceNewPlan=false, retain=false 2023-07-16 18:15:32,794 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=59, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=35814aedff3426711613031993a15b0e, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,33809,1689531323219; forceNewPlan=false, retain=false 2023-07-16 18:15:32,794 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=58, ppid=55, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7dd0f632e258203e2698c3fa6927d09, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41927,1689531323590; forceNewPlan=false, retain=false 2023-07-16 18:15:32,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-16 18:15:32,944 INFO [jenkins-hbase4:45445] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
2023-07-16 18:15:32,947 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=9e47427e6e7d83484bef64dc1523eabf, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:32,948 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=35814aedff3426711613031993a15b0e, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:32,948 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689531332558.9e47427e6e7d83484bef64dc1523eabf.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531332947"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531332947"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531332947"}]},"ts":"1689531332947"} 2023-07-16 18:15:32,947 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=50422474b5c8d14c069cdc6e45e54638, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:32,947 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=b7dd0f632e258203e2698c3fa6927d09, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:32,948 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689531332558.50422474b5c8d14c069cdc6e45e54638.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531332947"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531332947"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531332947"}]},"ts":"1689531332947"} 2023-07-16 18:15:32,947 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=4301d965c4312d19ab4e01adf2acb533, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:32,948 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689531332558.b7dd0f632e258203e2698c3fa6927d09.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531332947"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531332947"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531332947"}]},"ts":"1689531332947"} 2023-07-16 18:15:32,948 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689531332558.4301d965c4312d19ab4e01adf2acb533.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531332947"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531332947"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531332947"}]},"ts":"1689531332947"} 2023-07-16 18:15:32,948 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689531332558.35814aedff3426711613031993a15b0e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531332947"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531332947"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531332947"}]},"ts":"1689531332947"} 2023-07-16 18:15:32,951 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=61, ppid=60, state=RUNNABLE; OpenRegionProcedure 
9e47427e6e7d83484bef64dc1523eabf, server=jenkins-hbase4.apache.org,41927,1689531323590}] 2023-07-16 18:15:32,952 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=62, ppid=56, state=RUNNABLE; OpenRegionProcedure 50422474b5c8d14c069cdc6e45e54638, server=jenkins-hbase4.apache.org,41927,1689531323590}] 2023-07-16 18:15:32,953 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=63, ppid=58, state=RUNNABLE; OpenRegionProcedure b7dd0f632e258203e2698c3fa6927d09, server=jenkins-hbase4.apache.org,41927,1689531323590}] 2023-07-16 18:15:32,955 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=64, ppid=57, state=RUNNABLE; OpenRegionProcedure 4301d965c4312d19ab4e01adf2acb533, server=jenkins-hbase4.apache.org,33809,1689531323219}] 2023-07-16 18:15:32,960 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=65, ppid=59, state=RUNNABLE; OpenRegionProcedure 35814aedff3426711613031993a15b0e, server=jenkins-hbase4.apache.org,33809,1689531323219}] 2023-07-16 18:15:33,115 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,zzzzz,1689531332558.9e47427e6e7d83484bef64dc1523eabf. 2023-07-16 18:15:33,115 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 9e47427e6e7d83484bef64dc1523eabf, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689531332558.9e47427e6e7d83484bef64dc1523eabf.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-16 18:15:33,116 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 9e47427e6e7d83484bef64dc1523eabf 2023-07-16 18:15:33,116 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,zzzzz,1689531332558.9e47427e6e7d83484bef64dc1523eabf.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:33,116 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 9e47427e6e7d83484bef64dc1523eabf 2023-07-16 18:15:33,116 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 9e47427e6e7d83484bef64dc1523eabf 2023-07-16 18:15:33,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-16 18:15:33,117 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531332558.35814aedff3426711613031993a15b0e. 
2023-07-16 18:15:33,117 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 35814aedff3426711613031993a15b0e, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531332558.35814aedff3426711613031993a15b0e.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-16 18:15:33,117 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 35814aedff3426711613031993a15b0e 2023-07-16 18:15:33,117 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531332558.35814aedff3426711613031993a15b0e.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:33,117 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 35814aedff3426711613031993a15b0e 2023-07-16 18:15:33,117 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 35814aedff3426711613031993a15b0e 2023-07-16 18:15:33,127 INFO [StoreOpener-9e47427e6e7d83484bef64dc1523eabf-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 9e47427e6e7d83484bef64dc1523eabf 2023-07-16 18:15:33,127 INFO [StoreOpener-35814aedff3426711613031993a15b0e-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 35814aedff3426711613031993a15b0e 2023-07-16 18:15:33,129 DEBUG [StoreOpener-35814aedff3426711613031993a15b0e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/35814aedff3426711613031993a15b0e/f 2023-07-16 18:15:33,129 DEBUG [StoreOpener-35814aedff3426711613031993a15b0e-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/35814aedff3426711613031993a15b0e/f 2023-07-16 18:15:33,129 DEBUG [StoreOpener-9e47427e6e7d83484bef64dc1523eabf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/9e47427e6e7d83484bef64dc1523eabf/f 2023-07-16 18:15:33,129 DEBUG [StoreOpener-9e47427e6e7d83484bef64dc1523eabf-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/9e47427e6e7d83484bef64dc1523eabf/f 2023-07-16 18:15:33,130 INFO [StoreOpener-9e47427e6e7d83484bef64dc1523eabf-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to 
compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 9e47427e6e7d83484bef64dc1523eabf columnFamilyName f 2023-07-16 18:15:33,130 INFO [StoreOpener-35814aedff3426711613031993a15b0e-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 35814aedff3426711613031993a15b0e columnFamilyName f 2023-07-16 18:15:33,131 INFO [StoreOpener-35814aedff3426711613031993a15b0e-1] regionserver.HStore(310): Store=35814aedff3426711613031993a15b0e/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:33,131 INFO [StoreOpener-9e47427e6e7d83484bef64dc1523eabf-1] regionserver.HStore(310): Store=9e47427e6e7d83484bef64dc1523eabf/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:33,132 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/35814aedff3426711613031993a15b0e 2023-07-16 18:15:33,132 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/9e47427e6e7d83484bef64dc1523eabf 2023-07-16 18:15:33,132 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/35814aedff3426711613031993a15b0e 2023-07-16 18:15:33,132 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/9e47427e6e7d83484bef64dc1523eabf 2023-07-16 18:15:33,136 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 35814aedff3426711613031993a15b0e 2023-07-16 18:15:33,137 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 9e47427e6e7d83484bef64dc1523eabf 2023-07-16 18:15:33,147 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/9e47427e6e7d83484bef64dc1523eabf/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:33,147 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/35814aedff3426711613031993a15b0e/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:33,148 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 9e47427e6e7d83484bef64dc1523eabf; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11255468000, jitterRate=0.048247143626213074}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:33,148 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 9e47427e6e7d83484bef64dc1523eabf: 2023-07-16 18:15:33,148 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 35814aedff3426711613031993a15b0e; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10812111680, jitterRate=0.006956368684768677}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:33,148 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 35814aedff3426711613031993a15b0e: 2023-07-16 18:15:33,151 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,zzzzz,1689531332558.9e47427e6e7d83484bef64dc1523eabf., pid=61, masterSystemTime=1689531333110 2023-07-16 18:15:33,151 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531332558.35814aedff3426711613031993a15b0e., pid=65, masterSystemTime=1689531333112 2023-07-16 18:15:33,154 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531332558.35814aedff3426711613031993a15b0e. 2023-07-16 18:15:33,154 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531332558.35814aedff3426711613031993a15b0e. 2023-07-16 18:15:33,154 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,aaaaa,1689531332558.4301d965c4312d19ab4e01adf2acb533. 
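The CompactionConfiguration lines above print the effective compaction settings for column family f: minCompactSize 128 MB, 3 to 10 files per compaction, ratio 1.2 (5.0 off-peak), a 7-day major-compaction period with 0.5 jitter. A minimal sketch of the hbase-site.xml properties that feed those fields, assuming the standard key names; the values simply mirror what the mini-cluster prints above.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;

  public class CompactionTuning {
    public static void main(String[] args) {
      Configuration conf = HBaseConfiguration.create();
      // Counterparts of the CompactionConfiguration fields logged above.
      conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize: 128 MB
      conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact
      conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact
      conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                 // ratio
      conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);         // off-peak ratio
      conf.setLong("hbase.hregion.majorcompaction", 604800000L);            // major period: 7 days
      conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);          // major jitter
      System.out.println("ratio = " + conf.getFloat("hbase.hstore.compaction.ratio", 1.2f));
    }
  }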
2023-07-16 18:15:33,154 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 4301d965c4312d19ab4e01adf2acb533, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689531332558.4301d965c4312d19ab4e01adf2acb533.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-16 18:15:33,155 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 4301d965c4312d19ab4e01adf2acb533 2023-07-16 18:15:33,155 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,aaaaa,1689531332558.4301d965c4312d19ab4e01adf2acb533.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:33,155 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 4301d965c4312d19ab4e01adf2acb533 2023-07-16 18:15:33,155 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 4301d965c4312d19ab4e01adf2acb533 2023-07-16 18:15:33,156 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=59 updating hbase:meta row=35814aedff3426711613031993a15b0e, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:33,157 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689531332558.35814aedff3426711613031993a15b0e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531333156"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531333156"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531333156"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531333156"}]},"ts":"1689531333156"} 2023-07-16 18:15:33,157 INFO [StoreOpener-4301d965c4312d19ab4e01adf2acb533-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 4301d965c4312d19ab4e01adf2acb533 2023-07-16 18:15:33,158 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,zzzzz,1689531332558.9e47427e6e7d83484bef64dc1523eabf. 2023-07-16 18:15:33,158 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,zzzzz,1689531332558.9e47427e6e7d83484bef64dc1523eabf. 2023-07-16 18:15:33,159 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531332558.b7dd0f632e258203e2698c3fa6927d09. 
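The RegionStateStore Put above records the OPEN state in hbase:meta under the info family, using the server, serverstartcode and seqnumDuringOpen qualifiers shown in the JSON. A hedged read-side sketch of the same row follows; the row key is copied from the log, the rest is illustrative and assumes the region row exists (it does right after the Put above).

  import java.io.IOException;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.client.Get;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.Table;
  import org.apache.hadoop.hbase.util.Bytes;

  public class ReadRegionStateFromMeta {
    public static void main(String[] args) throws IOException {
      byte[] info = Bytes.toBytes("info");
      // Row key = region name used in the Put: table,start-key,timestamp.encoded-name.
      byte[] row = Bytes.toBytes(
          "Group_testTableMoveTruncateAndDrop,zzzzz,1689531332558.9e47427e6e7d83484bef64dc1523eabf.");
      try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
           Table meta = conn.getTable(TableName.META_TABLE_NAME)) {
        Result r = meta.get(new Get(row).addFamily(info));
        // Same qualifiers the RegionStateStore writes above.
        System.out.println("server           = " + Bytes.toString(r.getValue(info, Bytes.toBytes("server"))));
        System.out.println("serverstartcode  = " + Bytes.toLong(r.getValue(info, Bytes.toBytes("serverstartcode"))));
        System.out.println("seqnumDuringOpen = " + Bytes.toLong(r.getValue(info, Bytes.toBytes("seqnumDuringOpen"))));
      }
    }
  }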
2023-07-16 18:15:33,159 DEBUG [StoreOpener-4301d965c4312d19ab4e01adf2acb533-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/4301d965c4312d19ab4e01adf2acb533/f 2023-07-16 18:15:33,159 DEBUG [StoreOpener-4301d965c4312d19ab4e01adf2acb533-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/4301d965c4312d19ab4e01adf2acb533/f 2023-07-16 18:15:33,159 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b7dd0f632e258203e2698c3fa6927d09, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531332558.b7dd0f632e258203e2698c3fa6927d09.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-16 18:15:33,160 INFO [StoreOpener-4301d965c4312d19ab4e01adf2acb533-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 4301d965c4312d19ab4e01adf2acb533 columnFamilyName f 2023-07-16 18:15:33,160 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop b7dd0f632e258203e2698c3fa6927d09 2023-07-16 18:15:33,160 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531332558.b7dd0f632e258203e2698c3fa6927d09.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:33,160 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b7dd0f632e258203e2698c3fa6927d09 2023-07-16 18:15:33,160 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b7dd0f632e258203e2698c3fa6927d09 2023-07-16 18:15:33,160 INFO [StoreOpener-4301d965c4312d19ab4e01adf2acb533-1] regionserver.HStore(310): Store=4301d965c4312d19ab4e01adf2acb533/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:33,161 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/4301d965c4312d19ab4e01adf2acb533 2023-07-16 18:15:33,162 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=60 updating hbase:meta row=9e47427e6e7d83484bef64dc1523eabf, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:33,162 INFO [StoreOpener-b7dd0f632e258203e2698c3fa6927d09-1] 
regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b7dd0f632e258203e2698c3fa6927d09 2023-07-16 18:15:33,162 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689531332558.9e47427e6e7d83484bef64dc1523eabf.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531333162"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531333162"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531333162"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531333162"}]},"ts":"1689531333162"} 2023-07-16 18:15:33,163 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/4301d965c4312d19ab4e01adf2acb533 2023-07-16 18:15:33,166 DEBUG [StoreOpener-b7dd0f632e258203e2698c3fa6927d09-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/b7dd0f632e258203e2698c3fa6927d09/f 2023-07-16 18:15:33,166 DEBUG [StoreOpener-b7dd0f632e258203e2698c3fa6927d09-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/b7dd0f632e258203e2698c3fa6927d09/f 2023-07-16 18:15:33,167 INFO [StoreOpener-b7dd0f632e258203e2698c3fa6927d09-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b7dd0f632e258203e2698c3fa6927d09 columnFamilyName f 2023-07-16 18:15:33,168 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 4301d965c4312d19ab4e01adf2acb533 2023-07-16 18:15:33,168 INFO [StoreOpener-b7dd0f632e258203e2698c3fa6927d09-1] regionserver.HStore(310): Store=b7dd0f632e258203e2698c3fa6927d09/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:33,169 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=65, resume processing ppid=59 2023-07-16 18:15:33,169 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=65, ppid=59, state=SUCCESS; OpenRegionProcedure 35814aedff3426711613031993a15b0e, server=jenkins-hbase4.apache.org,33809,1689531323219 in 205 msec 2023-07-16 18:15:33,169 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=61, resume processing ppid=60 2023-07-16 18:15:33,169 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): 
Finished pid=61, ppid=60, state=SUCCESS; OpenRegionProcedure 9e47427e6e7d83484bef64dc1523eabf, server=jenkins-hbase4.apache.org,41927,1689531323590 in 215 msec 2023-07-16 18:15:33,171 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=60, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9e47427e6e7d83484bef64dc1523eabf, ASSIGN in 380 msec 2023-07-16 18:15:33,171 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=59, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=35814aedff3426711613031993a15b0e, ASSIGN in 380 msec 2023-07-16 18:15:33,171 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/b7dd0f632e258203e2698c3fa6927d09 2023-07-16 18:15:33,172 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/b7dd0f632e258203e2698c3fa6927d09 2023-07-16 18:15:33,179 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b7dd0f632e258203e2698c3fa6927d09 2023-07-16 18:15:33,190 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/4301d965c4312d19ab4e01adf2acb533/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:33,191 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 4301d965c4312d19ab4e01adf2acb533; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11112701120, jitterRate=0.03495094180107117}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:33,191 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 4301d965c4312d19ab4e01adf2acb533: 2023-07-16 18:15:33,191 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/b7dd0f632e258203e2698c3fa6927d09/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:33,192 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b7dd0f632e258203e2698c3fa6927d09; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11019816000, jitterRate=0.0263003408908844}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:33,192 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b7dd0f632e258203e2698c3fa6927d09: 2023-07-16 18:15:33,193 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531332558.b7dd0f632e258203e2698c3fa6927d09., pid=63, masterSystemTime=1689531333110 2023-07-16 18:15:33,195 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531332558.b7dd0f632e258203e2698c3fa6927d09. 2023-07-16 18:15:33,195 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531332558.b7dd0f632e258203e2698c3fa6927d09. 2023-07-16 18:15:33,195 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testTableMoveTruncateAndDrop,,1689531332558.50422474b5c8d14c069cdc6e45e54638. 2023-07-16 18:15:33,195 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 50422474b5c8d14c069cdc6e45e54638, NAME => 'Group_testTableMoveTruncateAndDrop,,1689531332558.50422474b5c8d14c069cdc6e45e54638.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-16 18:15:33,195 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testTableMoveTruncateAndDrop 50422474b5c8d14c069cdc6e45e54638 2023-07-16 18:15:33,195 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testTableMoveTruncateAndDrop,,1689531332558.50422474b5c8d14c069cdc6e45e54638.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:33,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 50422474b5c8d14c069cdc6e45e54638 2023-07-16 18:15:33,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 50422474b5c8d14c069cdc6e45e54638 2023-07-16 18:15:33,196 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=58 updating hbase:meta row=b7dd0f632e258203e2698c3fa6927d09, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:33,197 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689531332558.b7dd0f632e258203e2698c3fa6927d09.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531333196"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531333196"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531333196"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531333196"}]},"ts":"1689531333196"} 2023-07-16 18:15:33,197 INFO [StoreOpener-50422474b5c8d14c069cdc6e45e54638-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 50422474b5c8d14c069cdc6e45e54638 2023-07-16 18:15:33,198 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,aaaaa,1689531332558.4301d965c4312d19ab4e01adf2acb533., pid=64, masterSystemTime=1689531333112 2023-07-16 18:15:33,204 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,aaaaa,1689531332558.4301d965c4312d19ab4e01adf2acb533. 
2023-07-16 18:15:33,204 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,aaaaa,1689531332558.4301d965c4312d19ab4e01adf2acb533. 2023-07-16 18:15:33,204 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=57 updating hbase:meta row=4301d965c4312d19ab4e01adf2acb533, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:33,205 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689531332558.4301d965c4312d19ab4e01adf2acb533.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531333204"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531333204"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531333204"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531333204"}]},"ts":"1689531333204"} 2023-07-16 18:15:33,205 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=63, resume processing ppid=58 2023-07-16 18:15:33,205 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=63, ppid=58, state=SUCCESS; OpenRegionProcedure b7dd0f632e258203e2698c3fa6927d09, server=jenkins-hbase4.apache.org,41927,1689531323590 in 246 msec 2023-07-16 18:15:33,208 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=58, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7dd0f632e258203e2698c3fa6927d09, ASSIGN in 416 msec 2023-07-16 18:15:33,209 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=64, resume processing ppid=57 2023-07-16 18:15:33,209 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=64, ppid=57, state=SUCCESS; OpenRegionProcedure 4301d965c4312d19ab4e01adf2acb533, server=jenkins-hbase4.apache.org,33809,1689531323219 in 252 msec 2023-07-16 18:15:33,212 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=57, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4301d965c4312d19ab4e01adf2acb533, ASSIGN in 421 msec 2023-07-16 18:15:33,213 DEBUG [StoreOpener-50422474b5c8d14c069cdc6e45e54638-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/50422474b5c8d14c069cdc6e45e54638/f 2023-07-16 18:15:33,213 DEBUG [StoreOpener-50422474b5c8d14c069cdc6e45e54638-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/50422474b5c8d14c069cdc6e45e54638/f 2023-07-16 18:15:33,214 INFO [StoreOpener-50422474b5c8d14c069cdc6e45e54638-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 50422474b5c8d14c069cdc6e45e54638 columnFamilyName f 2023-07-16 18:15:33,215 INFO [StoreOpener-50422474b5c8d14c069cdc6e45e54638-1] regionserver.HStore(310): Store=50422474b5c8d14c069cdc6e45e54638/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:33,216 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/50422474b5c8d14c069cdc6e45e54638 2023-07-16 18:15:33,216 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/50422474b5c8d14c069cdc6e45e54638 2023-07-16 18:15:33,220 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 50422474b5c8d14c069cdc6e45e54638 2023-07-16 18:15:33,230 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/50422474b5c8d14c069cdc6e45e54638/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:33,231 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 50422474b5c8d14c069cdc6e45e54638; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9813300800, jitterRate=-0.0860651433467865}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:33,231 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 50422474b5c8d14c069cdc6e45e54638: 2023-07-16 18:15:33,232 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testTableMoveTruncateAndDrop,,1689531332558.50422474b5c8d14c069cdc6e45e54638., pid=62, masterSystemTime=1689531333110 2023-07-16 18:15:33,243 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=56 updating hbase:meta row=50422474b5c8d14c069cdc6e45e54638, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:33,244 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testTableMoveTruncateAndDrop,,1689531332558.50422474b5c8d14c069cdc6e45e54638.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531333243"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531333243"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531333243"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531333243"}]},"ts":"1689531333243"} 2023-07-16 18:15:33,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testTableMoveTruncateAndDrop,,1689531332558.50422474b5c8d14c069cdc6e45e54638. 
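Each region above opens with a SteppingSplitPolicy whose desiredMaxFileSize is the configured region max file size (10 GB here) scaled by the printed jitterRate. A sketch of how a comparable table could be declared; the table name split_policy_demo is hypothetical, while setMaxFileSize and the split-policy class name are the usual TableDescriptor knobs behind the values in the log.

  import java.io.IOException;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.client.TableDescriptor;
  import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
  import org.apache.hadoop.hbase.util.Bytes;

  public class SplitPolicyExample {
    public static void main(String[] args) throws IOException {
      TableName tn = TableName.valueOf("split_policy_demo"); // hypothetical name
      TableDescriptor td = TableDescriptorBuilder.newBuilder(tn)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          // Base size that, after the +/- jitter, becomes the desiredMaxFileSize printed at open time.
          .setMaxFileSize(10L * 1024 * 1024 * 1024)
          .setRegionSplitPolicyClassName(
              "org.apache.hadoop.hbase.regionserver.SteppingSplitPolicy")
          .build();
      try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
           Admin admin = conn.getAdmin()) {
        // Pre-split so several regions open with this policy, as in the log above.
        admin.createTable(td, new byte[][] { Bytes.toBytes("aaaaa"), Bytes.toBytes("zzzzz") });
      }
    }
  }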
2023-07-16 18:15:33,247 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testTableMoveTruncateAndDrop,,1689531332558.50422474b5c8d14c069cdc6e45e54638. 2023-07-16 18:15:33,250 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=62, resume processing ppid=56 2023-07-16 18:15:33,251 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=62, ppid=56, state=SUCCESS; OpenRegionProcedure 50422474b5c8d14c069cdc6e45e54638, server=jenkins-hbase4.apache.org,41927,1689531323590 in 294 msec 2023-07-16 18:15:33,254 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=56, resume processing ppid=55 2023-07-16 18:15:33,255 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=56, ppid=55, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=50422474b5c8d14c069cdc6e45e54638, ASSIGN in 462 msec 2023-07-16 18:15:33,255 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531333255"}]},"ts":"1689531333255"} 2023-07-16 18:15:33,257 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=ENABLED in hbase:meta 2023-07-16 18:15:33,259 DEBUG [PEWorker-1] procedure.TruncateTableProcedure(145): truncate 'Group_testTableMoveTruncateAndDrop' completed 2023-07-16 18:15:33,261 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=55, state=SUCCESS; TruncateTableProcedure (table=Group_testTableMoveTruncateAndDrop preserveSplits=true) in 758 msec 2023-07-16 18:15:33,618 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=55 2023-07-16 18:15:33,619 INFO [Listener at localhost/38073] client.HBaseAdmin$TableFuture(3541): Operation: TRUNCATE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 55 completed 2023-07-16 18:15:33,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1745006964 2023-07-16 18:15:33,620 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:33,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1745006964 2023-07-16 18:15:33,622 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:33,623 INFO [Listener at localhost/38073] client.HBaseAdmin$15(890): Started disable of Group_testTableMoveTruncateAndDrop 2023-07-16 18:15:33,623 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testTableMoveTruncateAndDrop 2023-07-16 18:15:33,625 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=66, 
state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 18:15:33,628 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-16 18:15:33,630 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531333630"}]},"ts":"1689531333630"} 2023-07-16 18:15:33,632 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLING in hbase:meta 2023-07-16 18:15:33,635 INFO [PEWorker-4] procedure.DisableTableProcedure(293): Set Group_testTableMoveTruncateAndDrop to state=DISABLING 2023-07-16 18:15:33,635 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=67, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=50422474b5c8d14c069cdc6e45e54638, UNASSIGN}, {pid=68, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4301d965c4312d19ab4e01adf2acb533, UNASSIGN}, {pid=69, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7dd0f632e258203e2698c3fa6927d09, UNASSIGN}, {pid=70, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=35814aedff3426711613031993a15b0e, UNASSIGN}, {pid=71, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9e47427e6e7d83484bef64dc1523eabf, UNASSIGN}] 2023-07-16 18:15:33,637 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=68, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4301d965c4312d19ab4e01adf2acb533, UNASSIGN 2023-07-16 18:15:33,637 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=67, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=50422474b5c8d14c069cdc6e45e54638, UNASSIGN 2023-07-16 18:15:33,638 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=69, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7dd0f632e258203e2698c3fa6927d09, UNASSIGN 2023-07-16 18:15:33,638 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=70, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=35814aedff3426711613031993a15b0e, UNASSIGN 2023-07-16 18:15:33,638 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=71, ppid=66, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9e47427e6e7d83484bef64dc1523eabf, UNASSIGN 2023-07-16 18:15:33,639 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=4301d965c4312d19ab4e01adf2acb533, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:33,639 DEBUG [PEWorker-2] assignment.RegionStateStore(405): 
Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689531332558.4301d965c4312d19ab4e01adf2acb533.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531333638"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531333638"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531333638"}]},"ts":"1689531333638"} 2023-07-16 18:15:33,639 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=50422474b5c8d14c069cdc6e45e54638, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:33,639 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=69 updating hbase:meta row=b7dd0f632e258203e2698c3fa6927d09, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:33,639 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,,1689531332558.50422474b5c8d14c069cdc6e45e54638.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531333639"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531333639"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531333639"}]},"ts":"1689531333639"} 2023-07-16 18:15:33,639 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689531332558.b7dd0f632e258203e2698c3fa6927d09.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531333639"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531333639"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531333639"}]},"ts":"1689531333639"} 2023-07-16 18:15:33,639 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=35814aedff3426711613031993a15b0e, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:33,640 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689531332558.35814aedff3426711613031993a15b0e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531333639"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531333639"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531333639"}]},"ts":"1689531333639"} 2023-07-16 18:15:33,640 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=9e47427e6e7d83484bef64dc1523eabf, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:33,640 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689531332558.9e47427e6e7d83484bef64dc1523eabf.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531333640"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531333640"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531333640"}]},"ts":"1689531333640"} 2023-07-16 18:15:33,643 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=72, ppid=68, state=RUNNABLE; CloseRegionProcedure 4301d965c4312d19ab4e01adf2acb533, server=jenkins-hbase4.apache.org,33809,1689531323219}] 2023-07-16 18:15:33,645 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=73, ppid=67, state=RUNNABLE; CloseRegionProcedure 50422474b5c8d14c069cdc6e45e54638, 
server=jenkins-hbase4.apache.org,41927,1689531323590}] 2023-07-16 18:15:33,651 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=74, ppid=69, state=RUNNABLE; CloseRegionProcedure b7dd0f632e258203e2698c3fa6927d09, server=jenkins-hbase4.apache.org,41927,1689531323590}] 2023-07-16 18:15:33,652 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=75, ppid=70, state=RUNNABLE; CloseRegionProcedure 35814aedff3426711613031993a15b0e, server=jenkins-hbase4.apache.org,33809,1689531323219}] 2023-07-16 18:15:33,653 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=76, ppid=71, state=RUNNABLE; CloseRegionProcedure 9e47427e6e7d83484bef64dc1523eabf, server=jenkins-hbase4.apache.org,41927,1689531323590}] 2023-07-16 18:15:33,730 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-16 18:15:33,795 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 35814aedff3426711613031993a15b0e 2023-07-16 18:15:33,797 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 35814aedff3426711613031993a15b0e, disabling compactions & flushes 2023-07-16 18:15:33,797 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531332558.35814aedff3426711613031993a15b0e. 2023-07-16 18:15:33,797 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531332558.35814aedff3426711613031993a15b0e. 2023-07-16 18:15:33,797 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531332558.35814aedff3426711613031993a15b0e. after waiting 0 ms 2023-07-16 18:15:33,797 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531332558.35814aedff3426711613031993a15b0e. 2023-07-16 18:15:33,797 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 50422474b5c8d14c069cdc6e45e54638 2023-07-16 18:15:33,798 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 50422474b5c8d14c069cdc6e45e54638, disabling compactions & flushes 2023-07-16 18:15:33,799 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,,1689531332558.50422474b5c8d14c069cdc6e45e54638. 2023-07-16 18:15:33,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,,1689531332558.50422474b5c8d14c069cdc6e45e54638. 2023-07-16 18:15:33,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,,1689531332558.50422474b5c8d14c069cdc6e45e54638. after waiting 0 ms 2023-07-16 18:15:33,799 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,,1689531332558.50422474b5c8d14c069cdc6e45e54638. 
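The TRUNCATE completion (pid=55, preserveSplits=true) and the DISABLE now in progress (pid=66) above correspond to synchronous Admin calls issued by the test client. A minimal sketch of that driver, assuming a standalone client against the same cluster configuration:

  import java.io.IOException;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;

  public class TruncateThenDisable {
    public static void main(String[] args) throws IOException {
      TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
      try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
           Admin admin = conn.getAdmin()) {
        // truncateTable requires the table to be disabled first; the truncate itself
        // re-creates and re-enables it (state=ENABLED above), keeping the five split
        // points because preserveSplits=true. This submits the TruncateTableProcedure (pid=55).
        admin.disableTable(tn);
        admin.truncateTable(tn, true);
        // The test then disables it again (DisableTableProcedure, pid=66), which unassigns
        // every region before the table can be dropped.
        admin.disableTable(tn);
      }
    }
  }

Each call blocks until the master reports the procedure done, which is what the repeated "Checking to see if procedure is done pid=..." polling above reflects.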
2023-07-16 18:15:33,806 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/35814aedff3426711613031993a15b0e/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 18:15:33,807 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/50422474b5c8d14c069cdc6e45e54638/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 18:15:33,808 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531332558.35814aedff3426711613031993a15b0e. 2023-07-16 18:15:33,808 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 35814aedff3426711613031993a15b0e: 2023-07-16 18:15:33,808 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,,1689531332558.50422474b5c8d14c069cdc6e45e54638. 2023-07-16 18:15:33,808 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 50422474b5c8d14c069cdc6e45e54638: 2023-07-16 18:15:33,812 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 35814aedff3426711613031993a15b0e 2023-07-16 18:15:33,812 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=70 updating hbase:meta row=35814aedff3426711613031993a15b0e, regionState=CLOSED 2023-07-16 18:15:33,812 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 4301d965c4312d19ab4e01adf2acb533 2023-07-16 18:15:33,812 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689531332558.35814aedff3426711613031993a15b0e.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531333812"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531333812"}]},"ts":"1689531333812"} 2023-07-16 18:15:33,813 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 50422474b5c8d14c069cdc6e45e54638 2023-07-16 18:15:33,814 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b7dd0f632e258203e2698c3fa6927d09 2023-07-16 18:15:33,813 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 4301d965c4312d19ab4e01adf2acb533, disabling compactions & flushes 2023-07-16 18:15:33,814 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,aaaaa,1689531332558.4301d965c4312d19ab4e01adf2acb533. 2023-07-16 18:15:33,814 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689531332558.4301d965c4312d19ab4e01adf2acb533. 2023-07-16 18:15:33,814 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,aaaaa,1689531332558.4301d965c4312d19ab4e01adf2acb533. 
after waiting 0 ms 2023-07-16 18:15:33,814 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,aaaaa,1689531332558.4301d965c4312d19ab4e01adf2acb533. 2023-07-16 18:15:33,815 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b7dd0f632e258203e2698c3fa6927d09, disabling compactions & flushes 2023-07-16 18:15:33,815 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=67 updating hbase:meta row=50422474b5c8d14c069cdc6e45e54638, regionState=CLOSED 2023-07-16 18:15:33,815 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531332558.b7dd0f632e258203e2698c3fa6927d09. 2023-07-16 18:15:33,815 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531332558.b7dd0f632e258203e2698c3fa6927d09. 2023-07-16 18:15:33,815 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531332558.b7dd0f632e258203e2698c3fa6927d09. after waiting 0 ms 2023-07-16 18:15:33,815 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,,1689531332558.50422474b5c8d14c069cdc6e45e54638.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531333815"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531333815"}]},"ts":"1689531333815"} 2023-07-16 18:15:33,815 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531332558.b7dd0f632e258203e2698c3fa6927d09. 2023-07-16 18:15:33,826 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=75, resume processing ppid=70 2023-07-16 18:15:33,826 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=75, ppid=70, state=SUCCESS; CloseRegionProcedure 35814aedff3426711613031993a15b0e, server=jenkins-hbase4.apache.org,33809,1689531323219 in 165 msec 2023-07-16 18:15:33,831 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=73, resume processing ppid=67 2023-07-16 18:15:33,831 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=73, ppid=67, state=SUCCESS; CloseRegionProcedure 50422474b5c8d14c069cdc6e45e54638, server=jenkins-hbase4.apache.org,41927,1689531323590 in 173 msec 2023-07-16 18:15:33,831 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/4301d965c4312d19ab4e01adf2acb533/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 18:15:33,832 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/b7dd0f632e258203e2698c3fa6927d09/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 18:15:33,832 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,aaaaa,1689531332558.4301d965c4312d19ab4e01adf2acb533. 
2023-07-16 18:15:33,833 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 4301d965c4312d19ab4e01adf2acb533: 2023-07-16 18:15:33,833 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531332558.b7dd0f632e258203e2698c3fa6927d09. 2023-07-16 18:15:33,833 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b7dd0f632e258203e2698c3fa6927d09: 2023-07-16 18:15:33,835 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=70, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=35814aedff3426711613031993a15b0e, UNASSIGN in 191 msec 2023-07-16 18:15:33,836 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 4301d965c4312d19ab4e01adf2acb533 2023-07-16 18:15:33,839 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=67, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=50422474b5c8d14c069cdc6e45e54638, UNASSIGN in 196 msec 2023-07-16 18:15:33,839 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b7dd0f632e258203e2698c3fa6927d09 2023-07-16 18:15:33,839 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 9e47427e6e7d83484bef64dc1523eabf 2023-07-16 18:15:33,840 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 9e47427e6e7d83484bef64dc1523eabf, disabling compactions & flushes 2023-07-16 18:15:33,841 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=68 updating hbase:meta row=4301d965c4312d19ab4e01adf2acb533, regionState=CLOSED 2023-07-16 18:15:33,841 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testTableMoveTruncateAndDrop,zzzzz,1689531332558.9e47427e6e7d83484bef64dc1523eabf. 2023-07-16 18:15:33,841 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=69 updating hbase:meta row=b7dd0f632e258203e2698c3fa6927d09, regionState=CLOSED 2023-07-16 18:15:33,841 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689531332558.4301d965c4312d19ab4e01adf2acb533.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531333839"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531333839"}]},"ts":"1689531333839"} 2023-07-16 18:15:33,841 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689531332558.9e47427e6e7d83484bef64dc1523eabf. 2023-07-16 18:15:33,841 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689531332558.b7dd0f632e258203e2698c3fa6927d09.","families":{"info":[{"qualifier":"regioninfo","vlen":78,"tag":[],"timestamp":"1689531333841"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531333841"}]},"ts":"1689531333841"} 2023-07-16 18:15:33,841 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testTableMoveTruncateAndDrop,zzzzz,1689531332558.9e47427e6e7d83484bef64dc1523eabf. 
after waiting 0 ms 2023-07-16 18:15:33,841 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testTableMoveTruncateAndDrop,zzzzz,1689531332558.9e47427e6e7d83484bef64dc1523eabf. 2023-07-16 18:15:33,847 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=74, resume processing ppid=69 2023-07-16 18:15:33,847 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=74, ppid=69, state=SUCCESS; CloseRegionProcedure b7dd0f632e258203e2698c3fa6927d09, server=jenkins-hbase4.apache.org,41927,1689531323590 in 193 msec 2023-07-16 18:15:33,847 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=72, resume processing ppid=68 2023-07-16 18:15:33,847 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=72, ppid=68, state=SUCCESS; CloseRegionProcedure 4301d965c4312d19ab4e01adf2acb533, server=jenkins-hbase4.apache.org,33809,1689531323219 in 200 msec 2023-07-16 18:15:33,848 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testTableMoveTruncateAndDrop/9e47427e6e7d83484bef64dc1523eabf/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 18:15:33,848 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=68, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=4301d965c4312d19ab4e01adf2acb533, UNASSIGN in 212 msec 2023-07-16 18:15:33,848 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=69, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=b7dd0f632e258203e2698c3fa6927d09, UNASSIGN in 212 msec 2023-07-16 18:15:33,849 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testTableMoveTruncateAndDrop,zzzzz,1689531332558.9e47427e6e7d83484bef64dc1523eabf. 
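Once every region is CLOSED the test drops the table, and the RSGroupAdminEndpoint hook strips it from rsgroup Group_testTableMoveTruncateAndDrop_1745006964 (see the znode updates that follow). A sketch of the client side, assuming the RSGroupAdminClient and RSGroupInfo helpers from this hbase-rsgroup module behave as in the test code; the group name is taken from the log.

  import java.io.IOException;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
  import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

  public class DropTableAndCheckGroup {
    public static void main(String[] args) throws IOException {
      TableName tn = TableName.valueOf("Group_testTableMoveTruncateAndDrop");
      String group = "Group_testTableMoveTruncateAndDrop_1745006964";
      try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
           Admin admin = conn.getAdmin()) {
        // Table is already disabled at this point; this submits the DeleteTableProcedure
        // (pid=77 below). The RSGroupAdminEndpoint then removes the table from its group.
        admin.deleteTable(tn);
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
        System.out.println("group " + info.getName()
            + " still references table? " + info.getTables().contains(tn));
      }
    }
  }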
2023-07-16 18:15:33,849 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 9e47427e6e7d83484bef64dc1523eabf: 2023-07-16 18:15:33,850 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 9e47427e6e7d83484bef64dc1523eabf 2023-07-16 18:15:33,851 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=71 updating hbase:meta row=9e47427e6e7d83484bef64dc1523eabf, regionState=CLOSED 2023-07-16 18:15:33,851 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689531332558.9e47427e6e7d83484bef64dc1523eabf.","families":{"info":[{"qualifier":"regioninfo","vlen":73,"tag":[],"timestamp":"1689531333851"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531333851"}]},"ts":"1689531333851"} 2023-07-16 18:15:33,854 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=76, resume processing ppid=71 2023-07-16 18:15:33,854 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=76, ppid=71, state=SUCCESS; CloseRegionProcedure 9e47427e6e7d83484bef64dc1523eabf, server=jenkins-hbase4.apache.org,41927,1689531323590 in 199 msec 2023-07-16 18:15:33,859 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=71, resume processing ppid=66 2023-07-16 18:15:33,859 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=71, ppid=66, state=SUCCESS; TransitRegionStateProcedure table=Group_testTableMoveTruncateAndDrop, region=9e47427e6e7d83484bef64dc1523eabf, UNASSIGN in 219 msec 2023-07-16 18:15:33,863 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531333863"}]},"ts":"1689531333863"} 2023-07-16 18:15:33,866 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testTableMoveTruncateAndDrop, state=DISABLED in hbase:meta 2023-07-16 18:15:33,868 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set Group_testTableMoveTruncateAndDrop to state=DISABLED 2023-07-16 18:15:33,870 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=66, state=SUCCESS; DisableTableProcedure table=Group_testTableMoveTruncateAndDrop in 244 msec 2023-07-16 18:15:33,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=66 2023-07-16 18:15:33,932 INFO [Listener at localhost/38073] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 66 completed 2023-07-16 18:15:33,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testTableMoveTruncateAndDrop 2023-07-16 18:15:33,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=77, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 18:15:33,948 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=77, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 18:15:33,948 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testTableMoveTruncateAndDrop' from 
rsgroup 'Group_testTableMoveTruncateAndDrop_1745006964' 2023-07-16 18:15:33,950 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=77, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 18:15:33,957 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1745006964 2023-07-16 18:15:33,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:33,958 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:33,959 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:33,967 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/50422474b5c8d14c069cdc6e45e54638 2023-07-16 18:15:33,967 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/35814aedff3426711613031993a15b0e 2023-07-16 18:15:33,967 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7dd0f632e258203e2698c3fa6927d09 2023-07-16 18:15:33,967 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4301d965c4312d19ab4e01adf2acb533 2023-07-16 18:15:33,967 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9e47427e6e7d83484bef64dc1523eabf 2023-07-16 18:15:33,969 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-16 18:15:33,970 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/50422474b5c8d14c069cdc6e45e54638/f, FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/50422474b5c8d14c069cdc6e45e54638/recovered.edits] 2023-07-16 18:15:33,970 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9e47427e6e7d83484bef64dc1523eabf/f, FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9e47427e6e7d83484bef64dc1523eabf/recovered.edits] 2023-07-16 18:15:33,971 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, 
hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/35814aedff3426711613031993a15b0e/f, FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/35814aedff3426711613031993a15b0e/recovered.edits] 2023-07-16 18:15:33,971 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7dd0f632e258203e2698c3fa6927d09/f, FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7dd0f632e258203e2698c3fa6927d09/recovered.edits] 2023-07-16 18:15:33,973 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4301d965c4312d19ab4e01adf2acb533/f, FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4301d965c4312d19ab4e01adf2acb533/recovered.edits] 2023-07-16 18:15:33,982 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/50422474b5c8d14c069cdc6e45e54638/recovered.edits/4.seqid to hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/archive/data/default/Group_testTableMoveTruncateAndDrop/50422474b5c8d14c069cdc6e45e54638/recovered.edits/4.seqid 2023-07-16 18:15:33,983 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/50422474b5c8d14c069cdc6e45e54638 2023-07-16 18:15:33,984 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7dd0f632e258203e2698c3fa6927d09/recovered.edits/4.seqid to hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/archive/data/default/Group_testTableMoveTruncateAndDrop/b7dd0f632e258203e2698c3fa6927d09/recovered.edits/4.seqid 2023-07-16 18:15:33,985 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/35814aedff3426711613031993a15b0e/recovered.edits/4.seqid to hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/archive/data/default/Group_testTableMoveTruncateAndDrop/35814aedff3426711613031993a15b0e/recovered.edits/4.seqid 2023-07-16 18:15:33,985 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/b7dd0f632e258203e2698c3fa6927d09 2023-07-16 18:15:33,986 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted 
hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/35814aedff3426711613031993a15b0e 2023-07-16 18:15:33,987 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9e47427e6e7d83484bef64dc1523eabf/recovered.edits/4.seqid to hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/archive/data/default/Group_testTableMoveTruncateAndDrop/9e47427e6e7d83484bef64dc1523eabf/recovered.edits/4.seqid 2023-07-16 18:15:33,987 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/9e47427e6e7d83484bef64dc1523eabf 2023-07-16 18:15:33,989 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4301d965c4312d19ab4e01adf2acb533/recovered.edits/4.seqid to hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/archive/data/default/Group_testTableMoveTruncateAndDrop/4301d965c4312d19ab4e01adf2acb533/recovered.edits/4.seqid 2023-07-16 18:15:33,989 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testTableMoveTruncateAndDrop/4301d965c4312d19ab4e01adf2acb533 2023-07-16 18:15:33,989 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived Group_testTableMoveTruncateAndDrop regions 2023-07-16 18:15:33,993 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=77, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 18:15:34,003 WARN [PEWorker-4] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testTableMoveTruncateAndDrop from hbase:meta 2023-07-16 18:15:34,012 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'Group_testTableMoveTruncateAndDrop' descriptor. 2023-07-16 18:15:34,014 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=77, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 18:15:34,014 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'Group_testTableMoveTruncateAndDrop' from region states. 
2023-07-16 18:15:34,014 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,,1689531332558.50422474b5c8d14c069cdc6e45e54638.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689531334014"}]},"ts":"9223372036854775807"} 2023-07-16 18:15:34,014 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,aaaaa,1689531332558.4301d965c4312d19ab4e01adf2acb533.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689531334014"}]},"ts":"9223372036854775807"} 2023-07-16 18:15:34,014 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,i\\xBF\\x14i\\xBE,1689531332558.b7dd0f632e258203e2698c3fa6927d09.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689531334014"}]},"ts":"9223372036854775807"} 2023-07-16 18:15:34,014 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,r\\x1C\\xC7r\\x1B,1689531332558.35814aedff3426711613031993a15b0e.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689531334014"}]},"ts":"9223372036854775807"} 2023-07-16 18:15:34,014 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop,zzzzz,1689531332558.9e47427e6e7d83484bef64dc1523eabf.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689531334014"}]},"ts":"9223372036854775807"} 2023-07-16 18:15:34,017 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-16 18:15:34,017 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 50422474b5c8d14c069cdc6e45e54638, NAME => 'Group_testTableMoveTruncateAndDrop,,1689531332558.50422474b5c8d14c069cdc6e45e54638.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => 4301d965c4312d19ab4e01adf2acb533, NAME => 'Group_testTableMoveTruncateAndDrop,aaaaa,1689531332558.4301d965c4312d19ab4e01adf2acb533.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => b7dd0f632e258203e2698c3fa6927d09, NAME => 'Group_testTableMoveTruncateAndDrop,i\xBF\x14i\xBE,1689531332558.b7dd0f632e258203e2698c3fa6927d09.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => 35814aedff3426711613031993a15b0e, NAME => 'Group_testTableMoveTruncateAndDrop,r\x1C\xC7r\x1B,1689531332558.35814aedff3426711613031993a15b0e.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 9e47427e6e7d83484bef64dc1523eabf, NAME => 'Group_testTableMoveTruncateAndDrop,zzzzz,1689531332558.9e47427e6e7d83484bef64dc1523eabf.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-16 18:15:34,017 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'Group_testTableMoveTruncateAndDrop' as deleted. 
2023-07-16 18:15:34,017 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testTableMoveTruncateAndDrop","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689531334017"}]},"ts":"9223372036854775807"} 2023-07-16 18:15:34,019 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table Group_testTableMoveTruncateAndDrop state from META 2023-07-16 18:15:34,022 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=77, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop 2023-07-16 18:15:34,023 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=77, state=SUCCESS; DeleteTableProcedure table=Group_testTableMoveTruncateAndDrop in 82 msec 2023-07-16 18:15:34,071 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=77 2023-07-16 18:15:34,072 INFO [Listener at localhost/38073] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testTableMoveTruncateAndDrop, procId: 77 completed 2023-07-16 18:15:34,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testTableMoveTruncateAndDrop_1745006964 2023-07-16 18:15:34,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:34,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:34,080 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:34,081 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:15:34,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 18:15:34,082 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:34,083 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927] to rsgroup default 2023-07-16 18:15:34,086 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testTableMoveTruncateAndDrop_1745006964 2023-07-16 18:15:34,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:34,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:34,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:34,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testTableMoveTruncateAndDrop_1745006964, current retry=0 2023-07-16 18:15:34,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33809,1689531323219, jenkins-hbase4.apache.org,41927,1689531323590] are moved back to Group_testTableMoveTruncateAndDrop_1745006964 2023-07-16 18:15:34,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testTableMoveTruncateAndDrop_1745006964 => default 2023-07-16 18:15:34,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:34,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testTableMoveTruncateAndDrop_1745006964 2023-07-16 18:15:34,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:34,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:34,105 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 18:15:34,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:15:34,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:15:34,109 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 18:15:34,109 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:34,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 18:15:34,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:34,110 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 18:15:34,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:34,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 18:15:34,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:15:34,119 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 18:15:34,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 18:15:34,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:34,123 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:34,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:15:34,127 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:15:34,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:34,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:34,133 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45445] to rsgroup master 2023-07-16 18:15:34,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:34,133 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.CallRunner(144): callId: 147 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45244 deadline: 1689532534133, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 2023-07-16 18:15:34,133 WARN [Listener at localhost/38073] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 18:15:34,136 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:34,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:34,137 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:34,137 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927, jenkins-hbase4.apache.org:43375, jenkins-hbase4.apache.org:44563], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 18:15:34,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:15:34,138 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:34,177 INFO [Listener at localhost/38073] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testTableMoveTruncateAndDrop Thread=505 (was 423) Potentially hanging thread: RS:3;jenkins-hbase4:44563 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) 
org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_239416957_17 at /127.0.0.1:50620 [Receiving block BP-761019734-172.31.14.131-1689531317183:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-5 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-4 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) 
org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-761019734-172.31.14.131-1689531317183:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-6 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x13eab7c0-shared-pool-8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp353812516-641 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=44563 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-5bcd3e79-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp353812516-639 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.3@localhost:36523 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp353812516-640 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44563 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x67375dfc-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_2036412677_17 at /127.0.0.1:50674 [Waiting for operation #7] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:44563Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_239416957_17 at 
/127.0.0.1:50652 [Receiving block BP-761019734-172.31.14.131-1689531317183:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53498@0x03688c79-SendThread(127.0.0.1:53498) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=44563 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: HFileArchiver-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp353812516-643 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-8 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_239416957_17 at /127.0.0.1:44170 [Receiving block BP-761019734-172.31.14.131-1689531317183:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x67375dfc-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp353812516-642 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'HBase' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_239416957_17 at /127.0.0.1:44204 [Receiving block BP-761019734-172.31.14.131-1689531317183:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53498@0x03688c79-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1956043164_17 at /127.0.0.1:44192 [Waiting for operation #7] 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:53498@0x03688c79 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/147357128.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-10 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x13eab7c0-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-761019734-172.31.14.131-1689531317183:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-761019734-172.31.14.131-1689531317183:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=44563 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x13eab7c0-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-761019734-172.31.14.131-1689531317183:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=44563 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_239416957_17 at /127.0.0.1:52434 [Receiving block 
BP-761019734-172.31.14.131-1689531317183:blk_1073741840_1016] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp353812516-637 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/472517021.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-8 sun.misc.Unsafe.park(Native 
Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-761019734-172.31.14.131-1689531317183:blk_1073741843_1019, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp353812516-638-acceptor-0@125b6eac-ServerConnector@4fd2dab2{HTTP/1.1, (http/1.1)}{0.0.0.0:45161} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:44563-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x13eab7c0-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-761019734-172.31.14.131-1689531317183:blk_1073741840_1016, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=44563 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x13eab7c0-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-7-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_META-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: RPCClient-NioEventLoopGroup-6-7 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44563 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_239416957_17 at /127.0.0.1:52476 [Receiving block BP-761019734-172.31.14.131-1689531317183:blk_1073741843_1019] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp353812516-644 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92-prefix:jenkins-hbase4.apache.org,44563,1689531327107 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=44563 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1956043164_17 at /127.0.0.1:44164 [Waiting for operation #11] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-9 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=44563 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x13eab7c0-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: HFileArchiver-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-3 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=44563 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-7-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92-prefix:jenkins-hbase4.apache.org,44563,1689531327107.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1806136717) connection to localhost/127.0.0.1:36523 from jenkins.hfs.3 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) - Thread LEAK? -, OpenFileDescriptor=822 (was 679) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=429 (was 413) - SystemLoadAverage LEAK? 
-, ProcessCount=173 (was 173), AvailableMemoryMB=3249 (was 3771)
2023-07-16 18:15:34,179 WARN [Listener at localhost/38073] hbase.ResourceChecker(130): Thread=505 is superior to 500
2023-07-16 18:15:34,201 INFO [Listener at localhost/38073] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=505, OpenFileDescriptor=822, MaxFileDescriptor=60000, SystemLoadAverage=429, ProcessCount=173, AvailableMemoryMB=3246
2023-07-16 18:15:34,201 WARN [Listener at localhost/38073] hbase.ResourceChecker(130): Thread=505 is superior to 500
2023-07-16 18:15:34,201 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase(132): testValidGroupNames
2023-07-16 18:15:34,209 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-16 18:15:34,209 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-16 18:15:34,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default
2023-07-16 18:15:34,211 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring.
2023-07-16 18:15:34,211 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables
2023-07-16 18:15:34,212 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default
2023-07-16 18:15:34,212 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers
2023-07-16 18:15:34,213 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master
2023-07-16 18:15:34,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-16 18:15:34,218 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3
2023-07-16 18:15:34,220 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup
2023-07-16 18:15:34,224 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase(152): Restoring servers: 0
2023-07-16 18:15:34,225 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master
2023-07-16 18:15:34,227 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default
2023-07-16 18:15:34,228 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master
2023-07-16 18:15:34,230 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4
2023-07-16 18:15:34,233 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup
2023-07-16 18:15:34,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup
2023-07-16 18:15:34,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos
2023-07-16 18:15:34,239 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45445] to rsgroup master
2023-07-16 18:15:34,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist.
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408)
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213)
at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193)
at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900)
at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349)
2023-07-16 18:15:34,240 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.CallRunner(144): callId: 175 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45244 deadline: 1689532534239, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist.
2023-07-16 18:15:34,240 WARN [Listener at localhost/38073] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist.
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 18:15:34,242 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:34,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:34,243 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:34,243 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927, jenkins-hbase4.apache.org:43375, jenkins-hbase4.apache.org:44563], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 18:15:34,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:15:34,244 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:34,245 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo* 2023-07-16 18:15:34,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:34,246 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.CallRunner(144): callId: 181 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:45244 deadline: 1689532534245, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-16 18:15:34,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo@ 2023-07-16 18:15:34,247 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:34,247 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.CallRunner(144): callId: 183 service: MasterService methodName: ExecMasterService size: 83 connection: 172.31.14.131:45244 deadline: 1689532534247, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-16 18:15:34,248 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup - 2023-07-16 18:15:34,248 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.checkGroupName(RSGroupInfoManagerImpl.java:932) at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.addRSGroup(RSGroupInfoManagerImpl.java:205) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.addRSGroup(RSGroupAdminServer.java:476) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.addRSGroup(RSGroupAdminEndpoint.java:258) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16203) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:34,249 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.CallRunner(144): callId: 185 service: MasterService methodName: ExecMasterService size: 80 connection: 172.31.14.131:45244 deadline: 1689532534248, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup name should only contain alphanumeric characters 2023-07-16 18:15:34,250 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup foo_123 2023-07-16 18:15:34,253 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/foo_123 2023-07-16 18:15:34,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:34,255 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:34,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:34,258 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:15:34,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:34,262 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:34,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:34,268 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:34,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:15:34,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
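The records above are testValidGroupNames exercising the group-name constraint: "foo*", "foo@" and "-" are rejected by RSGroupInfoManagerImpl.checkGroupName with a ConstraintException ("RSGroup name should only contain alphanumeric characters"), while "foo_123" is accepted and written to the /hbase/rsgroup znodes. A minimal client-side sketch of the same calls, assuming an already-open Connection conn (not shown in this log) and the private RSGroupAdminClient API that appears in the traces above:

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Sketch only: mirrors the addRSGroup calls logged above.
    static void checkGroupNames(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        for (String badName : new String[] { "foo*", "foo@", "-" }) {
            try {
                rsGroupAdmin.addRSGroup(badName);      // rejected by checkGroupName()
            } catch (ConstraintException expected) {
                // "RSGroup name should only contain alphanumeric characters"
            }
        }
        rsGroupAdmin.addRSGroup("foo_123");            // accepted: underscores pass the check
        rsGroupAdmin.removeRSGroup("foo_123");         // cleanup, as the teardown records show
    }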
2023-07-16 18:15:34,270 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:34,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 18:15:34,271 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:34,272 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup foo_123 2023-07-16 18:15:34,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:34,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:34,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 18:15:34,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:15:34,281 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:15:34,281 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
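The surrounding records are the shared per-test cleanup (TestRSGroupsBase.tearDownAfterMethod in the traces): empty moveTables()/moveServers() calls are no-ops ("passed an empty set. Ignoring."), then the non-default groups foo_123 and master are removed. A hedged sketch of that cleanup pass, again assuming a Connection conn; the exact teardown code is not reproduced in this log:

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    // Sketch: drop every group except "default", roughly what the logged teardown does.
    static void dropTestGroups(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        for (RSGroupInfo group : rsGroupAdmin.listRSGroups()) {
            if (!RSGroupInfo.DEFAULT_GROUP.equals(group.getName())) {
                // Return the group's tables and servers to the default group first;
                // empty sets are simply ignored, as the DEBUG lines above show.
                rsGroupAdmin.moveTables(group.getTables(), RSGroupInfo.DEFAULT_GROUP);
                rsGroupAdmin.moveServers(group.getServers(), RSGroupInfo.DEFAULT_GROUP);
                rsGroupAdmin.removeRSGroup(group.getName());
            }
        }
    }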
2023-07-16 18:15:34,281 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:34,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 18:15:34,282 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:34,283 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 18:15:34,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:34,288 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 18:15:34,294 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:15:34,299 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 18:15:34,300 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 18:15:34,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:34,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:34,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:15:34,307 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:15:34,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:34,312 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:34,315 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45445] to rsgroup master 2023-07-16 18:15:34,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:34,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.CallRunner(144): callId: 219 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45244 deadline: 1689532534315, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 2023-07-16 18:15:34,316 WARN [Listener at localhost/38073] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 18:15:34,318 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:34,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:34,320 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:34,320 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927, jenkins-hbase4.apache.org:43375, jenkins-hbase4.apache.org:44563], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 18:15:34,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:15:34,321 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:34,346 INFO [Listener at localhost/38073] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testValidGroupNames Thread=508 (was 505) Potentially hanging thread: hconnection-0x67375dfc-shared-pool-6 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x67375dfc-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x67375dfc-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=822 (was 822), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=429 (was 429), ProcessCount=173 (was 173), AvailableMemoryMB=3237 (was 3246) 2023-07-16 18:15:34,350 WARN [Listener at localhost/38073] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-16 18:15:34,373 INFO [Listener at localhost/38073] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=508, OpenFileDescriptor=822, MaxFileDescriptor=60000, SystemLoadAverage=429, ProcessCount=173, AvailableMemoryMB=3234 2023-07-16 18:15:34,373 WARN [Listener at localhost/38073] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-16 18:15:34,373 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase(132): testFailRemoveGroup 2023-07-16 18:15:34,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:34,380 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:34,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:15:34,381 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
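The WARN above ("Got this on setup, FYI") comes from the setup/teardown attempting to move jenkins-hbase4.apache.org:45445 into the "master" group; that address appears to be the master's RPC port rather than a region server, so RSGroupAdminServer rejects it as "either offline or it does not exist" and the test only logs the exception. A sketch of that tolerant move, assuming a Connection conn and a host/port to pin:

    import java.io.IOException;
    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Sketch: attempt to move one host:port into a group, tolerating the rejection
    // seen above when the address is not a live region server.
    static void tryMoveServer(Connection conn, String host, int port, String group)
            throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        try {
            rsGroupAdmin.moveServers(Collections.singleton(Address.fromParts(host, port)), group);
        } catch (ConstraintException e) {
            // "Server <host:port> is either offline or it does not exist." — logged and ignored,
            // mirroring the "Got this on setup, FYI" warning in the records above.
        }
    }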
2023-07-16 18:15:34,381 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:34,382 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 18:15:34,383 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:34,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 18:15:34,389 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:34,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 18:15:34,393 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:15:34,400 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 18:15:34,401 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 18:15:34,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:34,405 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:34,407 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:15:34,409 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:15:34,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:34,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:34,423 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45445] to rsgroup master 2023-07-16 18:15:34,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:34,423 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.CallRunner(144): callId: 247 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45244 deadline: 1689532534423, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 2023-07-16 18:15:34,424 WARN [Listener at localhost/38073] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 18:15:34,426 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:34,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:34,427 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:34,427 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927, jenkins-hbase4.apache.org:43375, jenkins-hbase4.apache.org:44563], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 18:15:34,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:15:34,428 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:34,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:34,429 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:34,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:15:34,430 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:34,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup bar 
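The record above begins testFailRemoveGroup by adding group "bar"; the records that follow move jenkins-hbase4.apache.org:33809, :41927 and :43375 into it, which forces the hbase:namespace region off those servers and back onto the remaining default-group server. A client-side sketch of that step (assuming a Connection conn; ports taken from the log below):

    import java.io.IOException;
    import java.util.HashSet;
    import java.util.Set;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Sketch: create a group and move specific region servers into it, as logged below.
    static void populateBar(Connection conn) throws IOException {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
        rsGroupAdmin.addRSGroup("bar");
        Set<Address> servers = new HashSet<>();
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 33809));
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 41927));
        servers.add(Address.fromParts("jenkins-hbase4.apache.org", 43375));
        // Regions on these servers that do not belong to "bar" (here hbase:namespace) are
        // reassigned to the default group before the move returns, per the DEBUG lines below.
        rsGroupAdmin.moveServers(servers, "bar");
    }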
2023-07-16 18:15:34,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:34,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-16 18:15:34,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:34,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:34,439 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:15:34,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:34,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:34,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927, jenkins-hbase4.apache.org:43375] to rsgroup bar 2023-07-16 18:15:34,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:34,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-16 18:15:34,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:34,452 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:34,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(238): Moving server region 583941d24df0f42b80730ed46c98845b, which do not belong to RSGroup bar 2023-07-16 18:15:34,455 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=583941d24df0f42b80730ed46c98845b, REOPEN/MOVE 2023-07-16 18:15:34,455 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-16 18:15:34,456 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=hbase:namespace, region=583941d24df0f42b80730ed46c98845b, REOPEN/MOVE 2023-07-16 18:15:34,457 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=583941d24df0f42b80730ed46c98845b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:34,457 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689531334457"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531334457"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531334457"}]},"ts":"1689531334457"} 2023-07-16 18:15:34,459 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=79, ppid=78, state=RUNNABLE; CloseRegionProcedure 583941d24df0f42b80730ed46c98845b, server=jenkins-hbase4.apache.org,43375,1689531323422}] 2023-07-16 18:15:34,616 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 583941d24df0f42b80730ed46c98845b 2023-07-16 18:15:34,617 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 583941d24df0f42b80730ed46c98845b, disabling compactions & flushes 2023-07-16 18:15:34,617 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b. 2023-07-16 18:15:34,617 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b. 2023-07-16 18:15:34,617 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b. after waiting 0 ms 2023-07-16 18:15:34,617 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b. 2023-07-16 18:15:34,618 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 583941d24df0f42b80730ed46c98845b 1/1 column families, dataSize=78 B heapSize=488 B 2023-07-16 18:15:34,648 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=78 B at sequenceid=6 (bloomFilter=true), to=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/namespace/583941d24df0f42b80730ed46c98845b/.tmp/info/518184b260ee463e929f23a479dc1fdd 2023-07-16 18:15:34,668 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/namespace/583941d24df0f42b80730ed46c98845b/.tmp/info/518184b260ee463e929f23a479dc1fdd as hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/namespace/583941d24df0f42b80730ed46c98845b/info/518184b260ee463e929f23a479dc1fdd 2023-07-16 18:15:34,677 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/namespace/583941d24df0f42b80730ed46c98845b/info/518184b260ee463e929f23a479dc1fdd, entries=2, sequenceid=6, filesize=4.8 K 2023-07-16 18:15:34,678 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~78 B/78, heapSize ~472 B/472, currentSize=0 B/0 for 583941d24df0f42b80730ed46c98845b in 60ms, sequenceid=6, compaction requested=false 2023-07-16 18:15:34,702 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/namespace/583941d24df0f42b80730ed46c98845b/recovered.edits/9.seqid, newMaxSeqId=9, maxSeqId=1 2023-07-16 18:15:34,704 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b. 2023-07-16 18:15:34,705 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 583941d24df0f42b80730ed46c98845b: 2023-07-16 18:15:34,705 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 583941d24df0f42b80730ed46c98845b move to jenkins-hbase4.apache.org,44563,1689531327107 record at close sequenceid=6 2023-07-16 18:15:34,712 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 583941d24df0f42b80730ed46c98845b 2023-07-16 18:15:34,712 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=583941d24df0f42b80730ed46c98845b, regionState=CLOSED 2023-07-16 18:15:34,712 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689531334712"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531334712"}]},"ts":"1689531334712"} 2023-07-16 18:15:34,716 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=79, resume processing ppid=78 2023-07-16 18:15:34,716 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=79, ppid=78, state=SUCCESS; CloseRegionProcedure 583941d24df0f42b80730ed46c98845b, server=jenkins-hbase4.apache.org,43375,1689531323422 in 255 msec 2023-07-16 18:15:34,717 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=78, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=583941d24df0f42b80730ed46c98845b, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44563,1689531327107; forceNewPlan=false, retain=false 2023-07-16 18:15:34,867 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=583941d24df0f42b80730ed46c98845b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:34,868 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689531334867"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531334867"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531334867"}]},"ts":"1689531334867"} 2023-07-16 18:15:34,870 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=80, ppid=78, state=RUNNABLE; OpenRegionProcedure 583941d24df0f42b80730ed46c98845b, server=jenkins-hbase4.apache.org,44563,1689531327107}] 2023-07-16 18:15:35,027 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b. 
2023-07-16 18:15:35,027 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 583941d24df0f42b80730ed46c98845b, NAME => 'hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:35,027 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 583941d24df0f42b80730ed46c98845b 2023-07-16 18:15:35,028 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:35,028 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 583941d24df0f42b80730ed46c98845b 2023-07-16 18:15:35,028 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 583941d24df0f42b80730ed46c98845b 2023-07-16 18:15:35,029 INFO [StoreOpener-583941d24df0f42b80730ed46c98845b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 583941d24df0f42b80730ed46c98845b 2023-07-16 18:15:35,031 DEBUG [StoreOpener-583941d24df0f42b80730ed46c98845b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/namespace/583941d24df0f42b80730ed46c98845b/info 2023-07-16 18:15:35,031 DEBUG [StoreOpener-583941d24df0f42b80730ed46c98845b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/namespace/583941d24df0f42b80730ed46c98845b/info 2023-07-16 18:15:35,031 INFO [StoreOpener-583941d24df0f42b80730ed46c98845b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 583941d24df0f42b80730ed46c98845b columnFamilyName info 2023-07-16 18:15:35,047 DEBUG [StoreOpener-583941d24df0f42b80730ed46c98845b-1] regionserver.HStore(539): loaded hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/namespace/583941d24df0f42b80730ed46c98845b/info/518184b260ee463e929f23a479dc1fdd 2023-07-16 18:15:35,048 INFO [StoreOpener-583941d24df0f42b80730ed46c98845b-1] regionserver.HStore(310): Store=583941d24df0f42b80730ed46c98845b/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:35,049 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/namespace/583941d24df0f42b80730ed46c98845b 2023-07-16 18:15:35,050 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/namespace/583941d24df0f42b80730ed46c98845b 2023-07-16 18:15:35,054 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 583941d24df0f42b80730ed46c98845b 2023-07-16 18:15:35,055 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 583941d24df0f42b80730ed46c98845b; next sequenceid=10; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9475180800, jitterRate=-0.11755502223968506}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:35,055 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 583941d24df0f42b80730ed46c98845b: 2023-07-16 18:15:35,056 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b., pid=80, masterSystemTime=1689531335022 2023-07-16 18:15:35,058 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b. 2023-07-16 18:15:35,058 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b. 
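Editor's note: the entries above show the standard REOPEN/MOVE cycle the master runs whenever a region changes servers: CloseRegionProcedure on the old regionserver, a CLOSED update in hbase:meta, re-assignment, OpenRegionProcedure on the target, and an OPEN update with the new openSeqNum. Here the cycle was triggered internally by the rsgroup MoveServers request that completes just below ("Move servers done: default => bar"). For illustration only, an equivalent explicit move of the hbase:namespace region could be requested from a client roughly as in the sketch below, assuming the Admin.move(byte[], ServerName) overload of the 2.x client API; the region and server names are copied from the log.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionInfo;

    public class MoveNamespaceRegion {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Target regionserver taken from the log above.
          ServerName target = ServerName.valueOf("jenkins-hbase4.apache.org,44563,1689531327107");
          // Each move runs the same close -> meta update -> open cycle logged above
          // (TransitRegionStateProcedure REOPEN/MOVE with Close/OpenRegionProcedure children).
          for (RegionInfo region : admin.getRegions(TableName.valueOf("hbase:namespace"))) {
            admin.move(region.getEncodedNameAsBytes(), target);
          }
        }
      }
    }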
2023-07-16 18:15:35,059 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=78 updating hbase:meta row=583941d24df0f42b80730ed46c98845b, regionState=OPEN, openSeqNum=10, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:35,059 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689531335058"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531335058"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531335058"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531335058"}]},"ts":"1689531335058"} 2023-07-16 18:15:35,062 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=80, resume processing ppid=78 2023-07-16 18:15:35,062 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=80, ppid=78, state=SUCCESS; OpenRegionProcedure 583941d24df0f42b80730ed46c98845b, server=jenkins-hbase4.apache.org,44563,1689531327107 in 190 msec 2023-07-16 18:15:35,064 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=78, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=583941d24df0f42b80730ed46c98845b, REOPEN/MOVE in 608 msec 2023-07-16 18:15:35,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure.ProcedureSyncWait(216): waitFor pid=78 2023-07-16 18:15:35,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33809,1689531323219, jenkins-hbase4.apache.org,41927,1689531323590, jenkins-hbase4.apache.org,43375,1689531323422] are moved back to default 2023-07-16 18:15:35,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(438): Move servers done: default => bar 2023-07-16 18:15:35,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:35,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:35,461 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:35,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-16 18:15:35,464 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:35,466 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', 
REPLICATION_SCOPE => '0'} 2023-07-16 18:15:35,467 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testFailRemoveGroup 2023-07-16 18:15:35,469 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 18:15:35,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testFailRemoveGroup" procId is: 81 2023-07-16 18:15:35,470 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-16 18:15:35,472 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:35,472 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-16 18:15:35,473 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:35,473 DEBUG [PEWorker-2] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:35,476 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 18:15:35,478 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testFailRemoveGroup/cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:35,478 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testFailRemoveGroup/cdf96aa5eee09490e0463364c9c96e98 empty. 
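Editor's note: the CreateTableProcedure stored above (pid=81) builds 'Group_testFailRemoveGroup' with a single column family 'f'; the full attribute list is in the create request two entries earlier. A minimal client-side sketch that creates a similarly simple table with the 2.x TableDescriptorBuilder API (family attributes are left at client defaults here, which may differ in detail from the logged schema):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class CreateGroupTestTable {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableDescriptor desc = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("Group_testFailRemoveGroup"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
              .build();
          // Synchronous: returns once the CreateTableProcedure has gone through
          // PRE_OPERATION, WRITE_FS_LAYOUT, ADD_TO_META, ASSIGN_REGIONS and POST_OPERATION,
          // the same states traced for pid=81 in the surrounding entries.
          admin.createTable(desc);
        }
      }
    }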
2023-07-16 18:15:35,479 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testFailRemoveGroup/cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:35,479 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-16 18:15:35,498 DEBUG [PEWorker-2] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testFailRemoveGroup/.tabledesc/.tableinfo.0000000001 2023-07-16 18:15:35,499 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => cdf96aa5eee09490e0463364c9c96e98, NAME => 'Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='Group_testFailRemoveGroup', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp 2023-07-16 18:15:35,511 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:35,511 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1604): Closing cdf96aa5eee09490e0463364c9c96e98, disabling compactions & flushes 2023-07-16 18:15:35,511 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. 2023-07-16 18:15:35,511 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. 2023-07-16 18:15:35,511 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. after waiting 0 ms 2023-07-16 18:15:35,511 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. 2023-07-16 18:15:35,511 INFO [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. 
2023-07-16 18:15:35,511 DEBUG [RegionOpenAndInit-Group_testFailRemoveGroup-pool-0] regionserver.HRegion(1558): Region close journal for cdf96aa5eee09490e0463364c9c96e98: 2023-07-16 18:15:35,513 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 18:15:35,514 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689531335514"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531335514"}]},"ts":"1689531335514"} 2023-07-16 18:15:35,516 INFO [PEWorker-2] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 18:15:35,517 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 18:15:35,517 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531335517"}]},"ts":"1689531335517"} 2023-07-16 18:15:35,519 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLING in hbase:meta 2023-07-16 18:15:35,527 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cdf96aa5eee09490e0463364c9c96e98, ASSIGN}] 2023-07-16 18:15:35,529 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cdf96aa5eee09490e0463364c9c96e98, ASSIGN 2023-07-16 18:15:35,530 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=82, ppid=81, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cdf96aa5eee09490e0463364c9c96e98, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44563,1689531327107; forceNewPlan=false, retain=false 2023-07-16 18:15:35,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-16 18:15:35,681 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=cdf96aa5eee09490e0463364c9c96e98, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:35,682 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689531335681"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531335681"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531335681"}]},"ts":"1689531335681"} 2023-07-16 18:15:35,683 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=83, ppid=82, state=RUNNABLE; OpenRegionProcedure cdf96aa5eee09490e0463364c9c96e98, server=jenkins-hbase4.apache.org,44563,1689531327107}] 2023-07-16 
18:15:35,774 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-16 18:15:35,839 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. 2023-07-16 18:15:35,839 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cdf96aa5eee09490e0463364c9c96e98, NAME => 'Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:35,840 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:35,840 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:35,840 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:35,840 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:35,841 INFO [StoreOpener-cdf96aa5eee09490e0463364c9c96e98-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:35,842 DEBUG [StoreOpener-cdf96aa5eee09490e0463364c9c96e98-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testFailRemoveGroup/cdf96aa5eee09490e0463364c9c96e98/f 2023-07-16 18:15:35,843 DEBUG [StoreOpener-cdf96aa5eee09490e0463364c9c96e98-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testFailRemoveGroup/cdf96aa5eee09490e0463364c9c96e98/f 2023-07-16 18:15:35,843 INFO [StoreOpener-cdf96aa5eee09490e0463364c9c96e98-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cdf96aa5eee09490e0463364c9c96e98 columnFamilyName f 2023-07-16 18:15:35,844 INFO [StoreOpener-cdf96aa5eee09490e0463364c9c96e98-1] regionserver.HStore(310): Store=cdf96aa5eee09490e0463364c9c96e98/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:35,844 
DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testFailRemoveGroup/cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:35,845 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testFailRemoveGroup/cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:35,847 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:35,849 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testFailRemoveGroup/cdf96aa5eee09490e0463364c9c96e98/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:35,850 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cdf96aa5eee09490e0463364c9c96e98; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9850245600, jitterRate=-0.0826243907213211}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:35,850 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cdf96aa5eee09490e0463364c9c96e98: 2023-07-16 18:15:35,851 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98., pid=83, masterSystemTime=1689531335835 2023-07-16 18:15:35,852 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. 2023-07-16 18:15:35,852 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. 
2023-07-16 18:15:35,853 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=82 updating hbase:meta row=cdf96aa5eee09490e0463364c9c96e98, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:35,853 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689531335853"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531335853"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531335853"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531335853"}]},"ts":"1689531335853"} 2023-07-16 18:15:35,856 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=83, resume processing ppid=82 2023-07-16 18:15:35,856 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=83, ppid=82, state=SUCCESS; OpenRegionProcedure cdf96aa5eee09490e0463364c9c96e98, server=jenkins-hbase4.apache.org,44563,1689531327107 in 171 msec 2023-07-16 18:15:35,858 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=82, resume processing ppid=81 2023-07-16 18:15:35,858 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=82, ppid=81, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cdf96aa5eee09490e0463364c9c96e98, ASSIGN in 329 msec 2023-07-16 18:15:35,859 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 18:15:35,859 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531335859"}]},"ts":"1689531335859"} 2023-07-16 18:15:35,861 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=ENABLED in hbase:meta 2023-07-16 18:15:35,864 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=81, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testFailRemoveGroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 18:15:35,865 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=81, state=SUCCESS; CreateTableProcedure table=Group_testFailRemoveGroup in 398 msec 2023-07-16 18:15:36,075 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=81 2023-07-16 18:15:36,076 INFO [Listener at localhost/38073] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testFailRemoveGroup, procId: 81 completed 2023-07-16 18:15:36,076 DEBUG [Listener at localhost/38073] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testFailRemoveGroup get assigned. Timeout = 60000ms 2023-07-16 18:15:36,076 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:36,083 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(3484): All regions for table Group_testFailRemoveGroup assigned to meta. Checking AM states. 
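Editor's note: once the CREATE operation reports completed, the listener thread waits until every region of the new table is assigned (what appears to be HBaseTestingUtility.waitUntilAllRegionsAssigned, with the 60,000 ms timeout visible above). Outside a minicluster test, a plain Admin poll gives the same guarantee; a rough sketch under that assumption:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class WaitForTable {
      public static void main(String[] args) throws Exception {
        TableName tn = TableName.valueOf("Group_testFailRemoveGroup");
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          long deadline = System.currentTimeMillis() + 60_000L;  // same 60 s budget as the log
          // isTableAvailable returns true only once every region of the table is open somewhere.
          while (!admin.isTableAvailable(tn)) {
            if (System.currentTimeMillis() > deadline) {
              throw new IllegalStateException("Regions of " + tn + " not assigned within 60 s");
            }
            Thread.sleep(200);
          }
        }
      }
    }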
2023-07-16 18:15:36,083 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:36,083 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(3504): All regions for table Group_testFailRemoveGroup assigned. 2023-07-16 18:15:36,085 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup bar 2023-07-16 18:15:36,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:36,088 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-16 18:15:36,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:36,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:36,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup bar 2023-07-16 18:15:36,091 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(345): Moving region cdf96aa5eee09490e0463364c9c96e98 to RSGroup bar 2023-07-16 18:15:36,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 18:15:36,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 18:15:36,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 18:15:36,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 18:15:36,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-16 18:15:36,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 18:15:36,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cdf96aa5eee09490e0463364c9c96e98, REOPEN/MOVE 2023-07-16 18:15:36,093 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group bar, current retry=0 2023-07-16 18:15:36,094 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cdf96aa5eee09490e0463364c9c96e98, REOPEN/MOVE 2023-07-16 18:15:36,095 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=cdf96aa5eee09490e0463364c9c96e98, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:36,095 DEBUG [PEWorker-1] assignment.RegionStateStore(405): 
Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689531336095"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531336095"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531336095"}]},"ts":"1689531336095"} 2023-07-16 18:15:36,097 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=85, ppid=84, state=RUNNABLE; CloseRegionProcedure cdf96aa5eee09490e0463364c9c96e98, server=jenkins-hbase4.apache.org,44563,1689531327107}] 2023-07-16 18:15:36,197 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-16 18:15:36,252 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:36,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cdf96aa5eee09490e0463364c9c96e98, disabling compactions & flushes 2023-07-16 18:15:36,253 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. 2023-07-16 18:15:36,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. 2023-07-16 18:15:36,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. after waiting 0 ms 2023-07-16 18:15:36,253 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. 2023-07-16 18:15:36,261 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testFailRemoveGroup/cdf96aa5eee09490e0463364c9c96e98/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 18:15:36,265 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. 
2023-07-16 18:15:36,265 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cdf96aa5eee09490e0463364c9c96e98: 2023-07-16 18:15:36,265 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding cdf96aa5eee09490e0463364c9c96e98 move to jenkins-hbase4.apache.org,41927,1689531323590 record at close sequenceid=2 2023-07-16 18:15:36,268 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:36,269 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=cdf96aa5eee09490e0463364c9c96e98, regionState=CLOSED 2023-07-16 18:15:36,270 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689531336269"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531336269"}]},"ts":"1689531336269"} 2023-07-16 18:15:36,274 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=85, resume processing ppid=84 2023-07-16 18:15:36,274 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=85, ppid=84, state=SUCCESS; CloseRegionProcedure cdf96aa5eee09490e0463364c9c96e98, server=jenkins-hbase4.apache.org,44563,1689531327107 in 175 msec 2023-07-16 18:15:36,275 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=84, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cdf96aa5eee09490e0463364c9c96e98, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,41927,1689531323590; forceNewPlan=false, retain=false 2023-07-16 18:15:36,426 INFO [jenkins-hbase4:45445] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-16 18:15:36,426 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=cdf96aa5eee09490e0463364c9c96e98, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:36,427 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689531336426"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531336426"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531336426"}]},"ts":"1689531336426"} 2023-07-16 18:15:36,429 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=86, ppid=84, state=RUNNABLE; OpenRegionProcedure cdf96aa5eee09490e0463364c9c96e98, server=jenkins-hbase4.apache.org,41927,1689531323590}] 2023-07-16 18:15:36,585 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. 
2023-07-16 18:15:36,585 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cdf96aa5eee09490e0463364c9c96e98, NAME => 'Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:36,586 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:36,586 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:36,586 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:36,586 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:36,587 INFO [StoreOpener-cdf96aa5eee09490e0463364c9c96e98-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:36,589 DEBUG [StoreOpener-cdf96aa5eee09490e0463364c9c96e98-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testFailRemoveGroup/cdf96aa5eee09490e0463364c9c96e98/f 2023-07-16 18:15:36,589 DEBUG [StoreOpener-cdf96aa5eee09490e0463364c9c96e98-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testFailRemoveGroup/cdf96aa5eee09490e0463364c9c96e98/f 2023-07-16 18:15:36,589 INFO [StoreOpener-cdf96aa5eee09490e0463364c9c96e98-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cdf96aa5eee09490e0463364c9c96e98 columnFamilyName f 2023-07-16 18:15:36,590 INFO [StoreOpener-cdf96aa5eee09490e0463364c9c96e98-1] regionserver.HStore(310): Store=cdf96aa5eee09490e0463364c9c96e98/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:36,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testFailRemoveGroup/cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:36,593 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testFailRemoveGroup/cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:36,596 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:36,597 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cdf96aa5eee09490e0463364c9c96e98; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11143697280, jitterRate=0.0378376841545105}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:36,597 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cdf96aa5eee09490e0463364c9c96e98: 2023-07-16 18:15:36,598 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98., pid=86, masterSystemTime=1689531336581 2023-07-16 18:15:36,600 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. 2023-07-16 18:15:36,600 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. 2023-07-16 18:15:36,600 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=84 updating hbase:meta row=cdf96aa5eee09490e0463364c9c96e98, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:36,601 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689531336600"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531336600"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531336600"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531336600"}]},"ts":"1689531336600"} 2023-07-16 18:15:36,604 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=86, resume processing ppid=84 2023-07-16 18:15:36,604 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=86, ppid=84, state=SUCCESS; OpenRegionProcedure cdf96aa5eee09490e0463364c9c96e98, server=jenkins-hbase4.apache.org,41927,1689531323590 in 173 msec 2023-07-16 18:15:36,606 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=84, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cdf96aa5eee09490e0463364c9c96e98, REOPEN/MOVE in 512 msec 2023-07-16 18:15:37,094 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure.ProcedureSyncWait(216): waitFor pid=84 2023-07-16 18:15:37,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group bar. 
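Editor's note: the block above is the server side of an RSGroupAdminService.MoveTables call: the rsgroup endpoint rewrites group membership in ZooKeeper (/hbase/rsgroup/...), then moves the table's only region onto a server of group 'bar' with another REOPEN/MOVE (pid=84) and blocks on ProcedureSyncWait until it finishes. A minimal client-side sketch, assuming the RSGroupAdminClient API shipped with the hbase-rsgroup module this test exercises:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveTableToBar {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Blocks until every region of the table has been reopened on a "bar" server,
          // mirroring the MoveTables / waitFor(pid=84) sequence in the log.
          rsGroupAdmin.moveTables(
              Collections.singleton(TableName.valueOf("Group_testFailRemoveGroup")), "bar");
        }
      }
    }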
2023-07-16 18:15:37,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:37,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:37,098 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:37,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bar 2023-07-16 18:15:37,101 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:37,102 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-16 18:15:37,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:490) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:37,102 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.CallRunner(144): callId: 285 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:45244 deadline: 1689532537102, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 1 tables; you must remove these tables from the rsgroup before the rsgroup can be removed. 2023-07-16 18:15:37,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927, jenkins-hbase4.apache.org:43375] to rsgroup default 2023-07-16 18:15:37,104 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:428) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:37,104 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.CallRunner(144): callId: 287 service: MasterService methodName: ExecMasterService size: 188 connection: 172.31.14.131:45244 deadline: 1689532537104, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Cannot leave a RSGroup bar that contains tables without servers to host them. 2023-07-16 18:15:37,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testFailRemoveGroup] to rsgroup default 2023-07-16 18:15:37,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:37,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-16 18:15:37,112 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:37,112 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:37,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(339): Moving region(s) for table Group_testFailRemoveGroup to RSGroup default 2023-07-16 18:15:37,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(345): Moving region cdf96aa5eee09490e0463364c9c96e98 to RSGroup default 2023-07-16 18:15:37,115 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cdf96aa5eee09490e0463364c9c96e98, REOPEN/MOVE 2023-07-16 18:15:37,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-16 18:15:37,117 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cdf96aa5eee09490e0463364c9c96e98, REOPEN/MOVE 2023-07-16 18:15:37,117 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=cdf96aa5eee09490e0463364c9c96e98, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:37,118 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689531337117"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531337117"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531337117"}]},"ts":"1689531337117"} 2023-07-16 18:15:37,122 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=88, ppid=87, state=RUNNABLE; CloseRegionProcedure cdf96aa5eee09490e0463364c9c96e98, server=jenkins-hbase4.apache.org,41927,1689531323590}] 2023-07-16 18:15:37,275 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:37,279 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cdf96aa5eee09490e0463364c9c96e98, disabling compactions & flushes 2023-07-16 18:15:37,279 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. 2023-07-16 18:15:37,279 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. 2023-07-16 18:15:37,279 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. after waiting 0 ms 2023-07-16 18:15:37,279 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. 2023-07-16 18:15:37,284 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testFailRemoveGroup/cdf96aa5eee09490e0463364c9c96e98/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 18:15:37,285 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. 
2023-07-16 18:15:37,285 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cdf96aa5eee09490e0463364c9c96e98: 2023-07-16 18:15:37,285 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding cdf96aa5eee09490e0463364c9c96e98 move to jenkins-hbase4.apache.org,44563,1689531327107 record at close sequenceid=5 2023-07-16 18:15:37,288 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:37,288 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=cdf96aa5eee09490e0463364c9c96e98, regionState=CLOSED 2023-07-16 18:15:37,289 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689531337288"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531337288"}]},"ts":"1689531337288"} 2023-07-16 18:15:37,293 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=88, resume processing ppid=87 2023-07-16 18:15:37,293 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=88, ppid=87, state=SUCCESS; CloseRegionProcedure cdf96aa5eee09490e0463364c9c96e98, server=jenkins-hbase4.apache.org,41927,1689531323590 in 171 msec 2023-07-16 18:15:37,294 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=87, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cdf96aa5eee09490e0463364c9c96e98, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44563,1689531327107; forceNewPlan=false, retain=false 2023-07-16 18:15:37,445 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=cdf96aa5eee09490e0463364c9c96e98, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:37,445 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689531337444"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531337444"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531337444"}]},"ts":"1689531337444"} 2023-07-16 18:15:37,447 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=89, ppid=87, state=RUNNABLE; OpenRegionProcedure cdf96aa5eee09490e0463364c9c96e98, server=jenkins-hbase4.apache.org,44563,1689531327107}] 2023-07-16 18:15:37,602 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. 
2023-07-16 18:15:37,602 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cdf96aa5eee09490e0463364c9c96e98, NAME => 'Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:37,603 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testFailRemoveGroup cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:37,603 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:37,603 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:37,603 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:37,604 INFO [StoreOpener-cdf96aa5eee09490e0463364c9c96e98-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:37,605 DEBUG [StoreOpener-cdf96aa5eee09490e0463364c9c96e98-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testFailRemoveGroup/cdf96aa5eee09490e0463364c9c96e98/f 2023-07-16 18:15:37,605 DEBUG [StoreOpener-cdf96aa5eee09490e0463364c9c96e98-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testFailRemoveGroup/cdf96aa5eee09490e0463364c9c96e98/f 2023-07-16 18:15:37,606 INFO [StoreOpener-cdf96aa5eee09490e0463364c9c96e98-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cdf96aa5eee09490e0463364c9c96e98 columnFamilyName f 2023-07-16 18:15:37,606 INFO [StoreOpener-cdf96aa5eee09490e0463364c9c96e98-1] regionserver.HStore(310): Store=cdf96aa5eee09490e0463364c9c96e98/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:37,607 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testFailRemoveGroup/cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:37,609 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testFailRemoveGroup/cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:37,612 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:37,613 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cdf96aa5eee09490e0463364c9c96e98; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11128696000, jitterRate=0.03644058108329773}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:37,613 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cdf96aa5eee09490e0463364c9c96e98: 2023-07-16 18:15:37,614 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98., pid=89, masterSystemTime=1689531337598 2023-07-16 18:15:37,615 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. 2023-07-16 18:15:37,615 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. 2023-07-16 18:15:37,615 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=87 updating hbase:meta row=cdf96aa5eee09490e0463364c9c96e98, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:37,616 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689531337615"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531337615"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531337615"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531337615"}]},"ts":"1689531337615"} 2023-07-16 18:15:37,618 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=89, resume processing ppid=87 2023-07-16 18:15:37,618 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=89, ppid=87, state=SUCCESS; OpenRegionProcedure cdf96aa5eee09490e0463364c9c96e98, server=jenkins-hbase4.apache.org,44563,1689531327107 in 171 msec 2023-07-16 18:15:37,620 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=87, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cdf96aa5eee09490e0463364c9c96e98, REOPEN/MOVE in 504 msec 2023-07-16 18:15:38,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure.ProcedureSyncWait(216): waitFor pid=87 2023-07-16 18:15:38,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testFailRemoveGroup] moved to target group default. 
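Editor's note: the two ConstraintException entries earlier (remove rsgroup bar, then move servers out of bar) establish the invariant this test checks: a group can only be removed once it holds no tables, and its servers cannot all leave while it still hosts tables. The MoveTables call that just finished is therefore the first corrective step, sending the group's tables back to the default group. A hedged sketch of that call, again assuming the RSGroupAdminClient API:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public class MoveBarTablesBack {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          RSGroupInfo bar = rsGroupAdmin.getRSGroupInfo("bar");
          // Send every table of "bar" back to the default group; each region is reopened
          // on a default-group server (pid=87 above) before the call returns.
          rsGroupAdmin.moveTables(bar.getTables(), RSGroupInfo.DEFAULT_GROUP);
        }
      }
    }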
2023-07-16 18:15:38,116 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:38,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:38,120 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:38,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-16 18:15:38,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:496) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:38,124 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.CallRunner(144): callId: 294 service: MasterService methodName: ExecMasterService size: 85 connection: 172.31.14.131:45244 deadline: 1689532538123, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bar has 3 servers; you must remove these servers from the RSGroup before the RSGroup can be removed. 
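The ConstraintException above is the expected outcome the test is probing: an rsgroup that still owns servers cannot be removed. A hedged sketch of that negative check follows, assuming an already-constructed RSGroupAdminClient handle; only the group name "bar" is taken from the log.

import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public final class RemoveNonEmptyGroup {
  // rsGroupAdmin is assumed to be an already-open RSGroupAdminClient.
  static void tryRemoveBar(RSGroupAdminClient rsGroupAdmin) throws Exception {
    try {
      // "bar" still holds three region servers at this point in the log,
      // so the master is expected to reject the removal.
      rsGroupAdmin.removeRSGroup("bar");
      throw new AssertionError("removeRSGroup should fail while servers remain in the group");
    } catch (ConstraintException expected) {
      // The client unwraps the remote exception, so the same
      // "RSGroup bar has 3 servers" message logged above surfaces here.
    }
  }
}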
2023-07-16 18:15:38,125 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927, jenkins-hbase4.apache.org:43375] to rsgroup default 2023-07-16 18:15:38,127 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:38,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/bar 2023-07-16 18:15:38,128 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:38,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:38,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group bar, current retry=0 2023-07-16 18:15:38,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33809,1689531323219, jenkins-hbase4.apache.org,41927,1689531323590, jenkins-hbase4.apache.org,43375,1689531323422] are moved back to bar 2023-07-16 18:15:38,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(438): Move servers done: bar => default 2023-07-16 18:15:38,131 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:38,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:38,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:38,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bar 2023-07-16 18:15:38,137 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43375] ipc.CallRunner(144): callId: 214 service: ClientService methodName: Scan size: 147 connection: 172.31.14.131:51414 deadline: 1689531398137, exception=org.apache.hadoop.hbase.exceptions.RegionMovedException: Region moved to: hostname=jenkins-hbase4.apache.org port=44563 startCode=1689531327107. As of locationSeqNum=6. 
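Before "bar" can be removed, its three region servers are moved back to "default", which is what the MoveServers records above show (znode updates followed by "Move servers done: bar => default"). A sketch of the equivalent client call, again assuming an RSGroupAdminClient handle; the host:port values are the ones from this particular run and would differ elsewhere.

import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public final class MoveServersOutOfBar {
  static void moveBarServersToDefault(RSGroupAdminClient rsGroupAdmin) throws Exception {
    Set<Address> servers = new HashSet<>();
    // Region server addresses as they appear in this run of the test.
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 33809));
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 41927));
    servers.add(Address.fromParts("jenkins-hbase4.apache.org", 43375));
    // The master rewrites the /hbase/rsgroup/* znodes and, once any regions
    // on these servers are relocated, reports the move as done.
    rsGroupAdmin.moveServers(servers, "default");
  }
}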
2023-07-16 18:15:38,249 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:38,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:38,250 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 18:15:38,252 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:15:38,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:38,255 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:38,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:38,257 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:38,259 INFO [Listener at localhost/38073] client.HBaseAdmin$15(890): Started disable of Group_testFailRemoveGroup 2023-07-16 18:15:38,260 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testFailRemoveGroup 2023-07-16 18:15:38,261 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=90, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testFailRemoveGroup 2023-07-16 18:15:38,263 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-16 18:15:38,264 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531338264"}]},"ts":"1689531338264"} 2023-07-16 18:15:38,265 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLING in hbase:meta 2023-07-16 18:15:38,268 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testFailRemoveGroup to state=DISABLING 2023-07-16 18:15:38,269 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cdf96aa5eee09490e0463364c9c96e98, UNASSIGN}] 2023-07-16 18:15:38,271 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=91, ppid=90, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cdf96aa5eee09490e0463364c9c96e98, UNASSIGN 2023-07-16 18:15:38,272 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=cdf96aa5eee09490e0463364c9c96e98, regionState=CLOSING, 
regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:38,272 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689531338272"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531338272"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531338272"}]},"ts":"1689531338272"} 2023-07-16 18:15:38,273 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=92, ppid=91, state=RUNNABLE; CloseRegionProcedure cdf96aa5eee09490e0463364c9c96e98, server=jenkins-hbase4.apache.org,44563,1689531327107}] 2023-07-16 18:15:38,365 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-16 18:15:38,425 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:38,426 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cdf96aa5eee09490e0463364c9c96e98, disabling compactions & flushes 2023-07-16 18:15:38,426 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. 2023-07-16 18:15:38,426 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. 2023-07-16 18:15:38,426 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. after waiting 0 ms 2023-07-16 18:15:38,426 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. 2023-07-16 18:15:38,431 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testFailRemoveGroup/cdf96aa5eee09490e0463364c9c96e98/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-16 18:15:38,432 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98. 
2023-07-16 18:15:38,432 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cdf96aa5eee09490e0463364c9c96e98: 2023-07-16 18:15:38,433 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:38,434 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=91 updating hbase:meta row=cdf96aa5eee09490e0463364c9c96e98, regionState=CLOSED 2023-07-16 18:15:38,434 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98.","families":{"info":[{"qualifier":"regioninfo","vlen":59,"tag":[],"timestamp":"1689531338434"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531338434"}]},"ts":"1689531338434"} 2023-07-16 18:15:38,437 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=92, resume processing ppid=91 2023-07-16 18:15:38,437 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=92, ppid=91, state=SUCCESS; CloseRegionProcedure cdf96aa5eee09490e0463364c9c96e98, server=jenkins-hbase4.apache.org,44563,1689531327107 in 162 msec 2023-07-16 18:15:38,439 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=91, resume processing ppid=90 2023-07-16 18:15:38,439 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=91, ppid=90, state=SUCCESS; TransitRegionStateProcedure table=Group_testFailRemoveGroup, region=cdf96aa5eee09490e0463364c9c96e98, UNASSIGN in 168 msec 2023-07-16 18:15:38,440 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531338440"}]},"ts":"1689531338440"} 2023-07-16 18:15:38,441 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testFailRemoveGroup, state=DISABLED in hbase:meta 2023-07-16 18:15:38,443 INFO [PEWorker-3] procedure.DisableTableProcedure(305): Set Group_testFailRemoveGroup to state=DISABLED 2023-07-16 18:15:38,445 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=90, state=SUCCESS; DisableTableProcedure table=Group_testFailRemoveGroup in 184 msec 2023-07-16 18:15:38,566 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=90 2023-07-16 18:15:38,566 INFO [Listener at localhost/38073] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testFailRemoveGroup, procId: 90 completed 2023-07-16 18:15:38,567 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testFailRemoveGroup 2023-07-16 18:15:38,568 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-16 18:15:38,570 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=93, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-16 18:15:38,570 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testFailRemoveGroup' from rsgroup 'default' 2023-07-16 18:15:38,570 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(113): 
Deleting regions from filesystem for pid=93, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-16 18:15:38,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:38,572 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:38,573 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:15:38,575 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testFailRemoveGroup/cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:38,575 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-16 18:15:38,577 DEBUG [HFileArchiver-6] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testFailRemoveGroup/cdf96aa5eee09490e0463364c9c96e98/f, FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testFailRemoveGroup/cdf96aa5eee09490e0463364c9c96e98/recovered.edits] 2023-07-16 18:15:38,582 DEBUG [HFileArchiver-6] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testFailRemoveGroup/cdf96aa5eee09490e0463364c9c96e98/recovered.edits/10.seqid to hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/archive/data/default/Group_testFailRemoveGroup/cdf96aa5eee09490e0463364c9c96e98/recovered.edits/10.seqid 2023-07-16 18:15:38,583 DEBUG [HFileArchiver-6] backup.HFileArchiver(596): Deleted hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testFailRemoveGroup/cdf96aa5eee09490e0463364c9c96e98 2023-07-16 18:15:38,583 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived Group_testFailRemoveGroup regions 2023-07-16 18:15:38,586 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=93, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-16 18:15:38,588 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of Group_testFailRemoveGroup from hbase:meta 2023-07-16 18:15:38,590 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'Group_testFailRemoveGroup' descriptor. 2023-07-16 18:15:38,591 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=93, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-16 18:15:38,591 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'Group_testFailRemoveGroup' from region states. 
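The HFileArchiver records above show the dropped region's files being relocated from the table's data directory into the cluster archive directory rather than deleted outright. A small illustrative check of that layout is below; it only uses the standard Hadoop FileSystem API, and the root directory, namespace, and path components are placeholders for whatever this run used, not values read from the test source.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class CheckRegionArchived {
  // Returns true if the region's files now live only under archive/,
  // mirroring the "Archived from ... to .../archive/data/..." records above.
  static boolean isArchived(Configuration conf, Path rootDir,
      String table, String encodedRegionName) throws Exception {
    FileSystem fs = rootDir.getFileSystem(conf);
    Path archivedRegion =
        new Path(rootDir, "archive/data/default/" + table + "/" + encodedRegionName);
    Path liveRegion =
        new Path(rootDir, "data/default/" + table + "/" + encodedRegionName);
    return fs.exists(archivedRegion) && !fs.exists(liveRegion);
  }
}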
2023-07-16 18:15:38,591 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689531338591"}]},"ts":"9223372036854775807"} 2023-07-16 18:15:38,593 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-16 18:15:38,593 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => cdf96aa5eee09490e0463364c9c96e98, NAME => 'Group_testFailRemoveGroup,,1689531335465.cdf96aa5eee09490e0463364c9c96e98.', STARTKEY => '', ENDKEY => ''}] 2023-07-16 18:15:38,593 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'Group_testFailRemoveGroup' as deleted. 2023-07-16 18:15:38,593 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testFailRemoveGroup","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689531338593"}]},"ts":"9223372036854775807"} 2023-07-16 18:15:38,595 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table Group_testFailRemoveGroup state from META 2023-07-16 18:15:38,597 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=93, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testFailRemoveGroup 2023-07-16 18:15:38,598 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=93, state=SUCCESS; DeleteTableProcedure table=Group_testFailRemoveGroup in 30 msec 2023-07-16 18:15:38,676 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=93 2023-07-16 18:15:38,677 INFO [Listener at localhost/38073] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testFailRemoveGroup, procId: 93 completed 2023-07-16 18:15:38,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:38,681 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:38,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:15:38,682 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
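The DISABLE (procId 90) and DELETE (procId 93) operations reported as completed above correspond to two ordinary Admin calls on the client side. A minimal sketch, assuming an open Connection to the cluster; only the table name comes from the log.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

public final class DropTestTable {
  static void dropTable(Connection conn) throws Exception {
    TableName table = TableName.valueOf("Group_testFailRemoveGroup");
    try (Admin admin = conn.getAdmin()) {
      // Each call blocks until the matching master procedure
      // (DisableTableProcedure / DeleteTableProcedure) finishes,
      // which is what the "procId ... completed" client records above report.
      if (!admin.isTableDisabled(table)) {
        admin.disableTable(table);
      }
      admin.deleteTable(table);
    }
  }
}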
2023-07-16 18:15:38,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:38,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 18:15:38,683 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:38,684 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 18:15:38,687 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:38,688 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 18:15:38,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:15:38,697 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 18:15:38,697 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 18:15:38,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:38,700 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:38,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:15:38,703 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:15:38,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:38,708 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:38,710 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45445] to rsgroup master 2023-07-16 18:15:38,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:38,710 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.CallRunner(144): callId: 342 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45244 deadline: 1689532538710, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 2023-07-16 18:15:38,711 WARN [Listener at localhost/38073] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 18:15:38,712 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:38,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:38,713 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:38,714 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927, jenkins-hbase4.apache.org:43375, jenkins-hbase4.apache.org:44563], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 18:15:38,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:15:38,714 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:38,733 INFO [Listener at localhost/38073] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testFailRemoveGroup Thread=514 (was 508) Potentially hanging thread: hconnection-0x67375dfc-shared-pool-9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x13eab7c0-shared-pool-14 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-563003686_17 at /127.0.0.1:60290 [Waiting for operation #5] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x67375dfc-shared-pool-12 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x67375dfc-shared-pool-11 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x13eab7c0-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: DataXceiver for client DFSClient_NONMAPREDUCE_2036412677_17 at /127.0.0.1:44192 [Waiting for operation #9] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x13eab7c0-shared-pool-13 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x67375dfc-shared-pool-10 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1a923ff9-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/cluster_87eeeeb5-5880-a92c-ac20-b2d9553ef3c2/dfs/data/data6/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x13eab7c0-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/cluster_87eeeeb5-5880-a92c-ac20-b2d9553ef3c2/dfs/data/data5/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/cluster_87eeeeb5-5880-a92c-ac20-b2d9553ef3c2/dfs/data/data1/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-563003686_17 at /127.0.0.1:60324 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) 
java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x13eab7c0-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_239416957_17 at /127.0.0.1:44164 [Waiting for operation #13] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Async disk worker #0 for volume /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/cluster_87eeeeb5-5880-a92c-ac20-b2d9553ef3c2/dfs/data/data2/current sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x13eab7c0-shared-pool-15 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? 
-, OpenFileDescriptor=825 (was 822) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=418 (was 429), ProcessCount=173 (was 173), AvailableMemoryMB=3094 (was 3234) 2023-07-16 18:15:38,733 WARN [Listener at localhost/38073] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-16 18:15:38,750 INFO [Listener at localhost/38073] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=514, OpenFileDescriptor=825, MaxFileDescriptor=60000, SystemLoadAverage=418, ProcessCount=173, AvailableMemoryMB=3093 2023-07-16 18:15:38,750 WARN [Listener at localhost/38073] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-16 18:15:38,750 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase(132): testMultiTableMove 2023-07-16 18:15:38,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:38,754 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:38,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:15:38,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-16 18:15:38,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:38,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 18:15:38,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:38,757 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 18:15:38,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:38,761 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 18:15:38,762 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:15:38,765 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 18:15:38,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 18:15:38,767 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 
18:15:38,768 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:38,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:15:38,772 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:15:38,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:38,775 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:38,776 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45445] to rsgroup master 2023-07-16 18:15:38,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:38,777 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.CallRunner(144): callId: 370 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45244 deadline: 1689532538776, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 2023-07-16 18:15:38,777 WARN [Listener at localhost/38073] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 18:15:38,781 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:38,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:38,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:38,782 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927, jenkins-hbase4.apache.org:43375, jenkins-hbase4.apache.org:44563], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 18:15:38,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:15:38,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:38,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:15:38,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:38,784 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testMultiTableMove_704176438 2023-07-16 18:15:38,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:38,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:38,786 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_704176438 2023-07-16 18:15:38,788 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:38,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:15:38,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:38,792 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:38,794 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33809] to rsgroup Group_testMultiTableMove_704176438 2023-07-16 18:15:38,796 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:38,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_704176438 2023-07-16 18:15:38,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:38,797 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:38,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-16 18:15:38,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33809,1689531323219] are moved back to default 2023-07-16 18:15:38,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testMultiTableMove_704176438 2023-07-16 18:15:38,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:38,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:38,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:38,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_704176438 2023-07-16 18:15:38,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:38,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 18:15:38,811 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 18:15:38,813 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure 
table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 18:15:38,813 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveA" procId is: 94 2023-07-16 18:15:38,814 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-16 18:15:38,816 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:38,816 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_704176438 2023-07-16 18:15:38,817 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:38,818 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:38,823 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 18:15:38,824 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/GrouptestMultiTableMoveA/1f2bfaf1a29ac65f21f1a8da128fcc6f 2023-07-16 18:15:38,825 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/GrouptestMultiTableMoveA/1f2bfaf1a29ac65f21f1a8da128fcc6f empty. 2023-07-16 18:15:38,826 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/GrouptestMultiTableMoveA/1f2bfaf1a29ac65f21f1a8da128fcc6f 2023-07-16 18:15:38,826 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-16 18:15:38,842 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/GrouptestMultiTableMoveA/.tabledesc/.tableinfo.0000000001 2023-07-16 18:15:38,843 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(7675): creating {ENCODED => 1f2bfaf1a29ac65f21f1a8da128fcc6f, NAME => 'GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveA', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp 2023-07-16 18:15:38,858 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:38,858 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1604): Closing 
1f2bfaf1a29ac65f21f1a8da128fcc6f, disabling compactions & flushes 2023-07-16 18:15:38,858 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f. 2023-07-16 18:15:38,858 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f. 2023-07-16 18:15:38,858 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f. after waiting 0 ms 2023-07-16 18:15:38,858 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f. 2023-07-16 18:15:38,858 INFO [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f. 2023-07-16 18:15:38,858 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveA-pool-0] regionserver.HRegion(1558): Region close journal for 1f2bfaf1a29ac65f21f1a8da128fcc6f: 2023-07-16 18:15:38,861 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 18:15:38,862 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689531338862"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531338862"}]},"ts":"1689531338862"} 2023-07-16 18:15:38,864 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
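The ConstraintException stack trace earlier in this excerpt ("Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist") is raised while the per-method reset (TestRSGroupsBase.tearDownAfterMethod, invoked from setUpBeforeMethod) tries to move the master's own address into the "master" rsgroup; the test only logs it as "Got this on setup, FYI" and continues. A minimal sketch of tolerating that expected failure, assuming the RSGroupAdminClient API named in the trace; the class, method name, and connection handling here are illustrative, not the test's actual code:

    import java.util.Collections;

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Hypothetical helper, not part of the test source: tolerate the expected
    // ConstraintException when the master's address is not a live region server.
    public class MasterGroupResetSketch {
      static void moveMasterToItsGroup(Connection connection) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(connection);
        try {
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:45445")),
              "master");
        } catch (ConstraintException expected) {
          // Same failure as the stack trace above ("Server ... is either offline or
          // it does not exist"); TestRSGroupsBase logs it as "Got this on setup, FYI".
        }
      }
    }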
2023-07-16 18:15:38,864 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 18:15:38,865 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531338865"}]},"ts":"1689531338865"} 2023-07-16 18:15:38,870 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLING in hbase:meta 2023-07-16 18:15:38,874 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 18:15:38,874 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 18:15:38,874 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 18:15:38,874 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 18:15:38,874 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 18:15:38,875 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1f2bfaf1a29ac65f21f1a8da128fcc6f, ASSIGN}] 2023-07-16 18:15:38,877 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1f2bfaf1a29ac65f21f1a8da128fcc6f, ASSIGN 2023-07-16 18:15:38,877 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=95, ppid=94, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1f2bfaf1a29ac65f21f1a8da128fcc6f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44563,1689531327107; forceNewPlan=false, retain=false 2023-07-16 18:15:38,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-16 18:15:39,028 INFO [jenkins-hbase4:45445] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-16 18:15:39,029 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=1f2bfaf1a29ac65f21f1a8da128fcc6f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:39,029 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689531339029"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531339029"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531339029"}]},"ts":"1689531339029"} 2023-07-16 18:15:39,031 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=96, ppid=95, state=RUNNABLE; OpenRegionProcedure 1f2bfaf1a29ac65f21f1a8da128fcc6f, server=jenkins-hbase4.apache.org,44563,1689531327107}] 2023-07-16 18:15:39,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-16 18:15:39,187 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f. 2023-07-16 18:15:39,187 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1f2bfaf1a29ac65f21f1a8da128fcc6f, NAME => 'GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:39,188 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 1f2bfaf1a29ac65f21f1a8da128fcc6f 2023-07-16 18:15:39,188 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:39,188 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1f2bfaf1a29ac65f21f1a8da128fcc6f 2023-07-16 18:15:39,188 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1f2bfaf1a29ac65f21f1a8da128fcc6f 2023-07-16 18:15:39,189 INFO [StoreOpener-1f2bfaf1a29ac65f21f1a8da128fcc6f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1f2bfaf1a29ac65f21f1a8da128fcc6f 2023-07-16 18:15:39,191 DEBUG [StoreOpener-1f2bfaf1a29ac65f21f1a8da128fcc6f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/GrouptestMultiTableMoveA/1f2bfaf1a29ac65f21f1a8da128fcc6f/f 2023-07-16 18:15:39,191 DEBUG [StoreOpener-1f2bfaf1a29ac65f21f1a8da128fcc6f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/GrouptestMultiTableMoveA/1f2bfaf1a29ac65f21f1a8da128fcc6f/f 2023-07-16 18:15:39,191 INFO [StoreOpener-1f2bfaf1a29ac65f21f1a8da128fcc6f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1f2bfaf1a29ac65f21f1a8da128fcc6f columnFamilyName f 2023-07-16 18:15:39,192 INFO [StoreOpener-1f2bfaf1a29ac65f21f1a8da128fcc6f-1] regionserver.HStore(310): Store=1f2bfaf1a29ac65f21f1a8da128fcc6f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:39,193 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/GrouptestMultiTableMoveA/1f2bfaf1a29ac65f21f1a8da128fcc6f 2023-07-16 18:15:39,193 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/GrouptestMultiTableMoveA/1f2bfaf1a29ac65f21f1a8da128fcc6f 2023-07-16 18:15:39,196 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1f2bfaf1a29ac65f21f1a8da128fcc6f 2023-07-16 18:15:39,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/GrouptestMultiTableMoveA/1f2bfaf1a29ac65f21f1a8da128fcc6f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:39,199 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1f2bfaf1a29ac65f21f1a8da128fcc6f; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9966890560, jitterRate=-0.07176098227500916}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:39,199 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1f2bfaf1a29ac65f21f1a8da128fcc6f: 2023-07-16 18:15:39,200 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f., pid=96, masterSystemTime=1689531339183 2023-07-16 18:15:39,202 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f. 2023-07-16 18:15:39,202 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f. 
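The create logged above ('GrouptestMultiTableMoveA' with REGION_REPLICATION => '1' and a single family 'f', VERSIONS => '1') runs as CreateTableProcedure pid=94 through the states shown: write the FS layout, add the region to hbase:meta, then assign and open it on jenkins-hbase4.apache.org,44563. A minimal client-side sketch that would produce an equivalent descriptor, assuming the standard HBase 2.x Admin/TableDescriptorBuilder API; table and family names are taken from the log, everything else is illustrative:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    // Hypothetical client-side equivalent of the create logged by HMaster above.
    public class CreateTableSketch {
      static void createMultiTableMoveA(Admin admin) throws Exception {
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("GrouptestMultiTableMoveA"))
            .setRegionReplication(1) // TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}
            .setColumnFamily(ColumnFamilyDescriptorBuilder
                .newBuilder(Bytes.toBytes("f")) // single family 'f'
                .setMaxVersions(1)              // VERSIONS => '1'
                .build())
            .build();
        // Blocks until the CreateTableProcedure (pid=94 above) completes.
        admin.createTable(desc);
      }
    }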
2023-07-16 18:15:39,202 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=95 updating hbase:meta row=1f2bfaf1a29ac65f21f1a8da128fcc6f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:39,202 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689531339202"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531339202"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531339202"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531339202"}]},"ts":"1689531339202"} 2023-07-16 18:15:39,205 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=96, resume processing ppid=95 2023-07-16 18:15:39,205 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=96, ppid=95, state=SUCCESS; OpenRegionProcedure 1f2bfaf1a29ac65f21f1a8da128fcc6f, server=jenkins-hbase4.apache.org,44563,1689531327107 in 173 msec 2023-07-16 18:15:39,207 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=95, resume processing ppid=94 2023-07-16 18:15:39,207 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=95, ppid=94, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1f2bfaf1a29ac65f21f1a8da128fcc6f, ASSIGN in 330 msec 2023-07-16 18:15:39,208 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 18:15:39,208 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531339208"}]},"ts":"1689531339208"} 2023-07-16 18:15:39,209 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=ENABLED in hbase:meta 2023-07-16 18:15:39,211 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=94, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveA execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 18:15:39,212 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=94, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveA in 401 msec 2023-07-16 18:15:39,380 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-16 18:15:39,381 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'GrouptestMultiTableMoveA' 2023-07-16 18:15:39,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=94 2023-07-16 18:15:39,419 INFO [Listener at localhost/38073] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveA, procId: 94 completed 2023-07-16 18:15:39,419 DEBUG [Listener at localhost/38073] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveA get assigned. 
Timeout = 60000ms 2023-07-16 18:15:39,419 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:39,430 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveA assigned to meta. Checking AM states. 2023-07-16 18:15:39,430 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:39,431 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveA assigned. 2023-07-16 18:15:39,433 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 18:15:39,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 18:15:39,436 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 18:15:39,436 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "GrouptestMultiTableMoveB" procId is: 97 2023-07-16 18:15:39,438 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-16 18:15:39,441 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:39,624 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-16 18:15:39,625 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_704176438 2023-07-16 18:15:39,626 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:39,626 DEBUG [PEWorker-1] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:39,630 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 18:15:39,631 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/GrouptestMultiTableMoveB/57006fc6fe079fd17e6390e8ec3e5988 2023-07-16 18:15:39,632 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/GrouptestMultiTableMoveB/57006fc6fe079fd17e6390e8ec3e5988 empty. 
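The HBaseTestingUtility(3430/3504) entries above are the test blocking until every region of GrouptestMultiTableMoveA is assigned, with a 60,000 ms timeout; the Waiter(180) lines are the generic polling utility behind that wait. A small sketch of the same pattern, assuming the HBaseTestingUtility and Waiter helpers referenced in the log ("util" stands in for the test's TEST_UTIL instance; the predicate body is illustrative):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.Waiter;

    // Hypothetical wait helper mirroring the log above.
    public class AssignmentWaitSketch {
      static void waitForAssignment(HBaseTestingUtility util, TableName table) throws Exception {
        // "Waiting until all regions of table ... get assigned. Timeout = 60000ms"
        util.waitUntilAllRegionsAssigned(table, 60000);
        // "Waiting up to [60,000] milli-secs(wait.for.ratio=[1])"
        util.waitFor(60000, new Waiter.Predicate<Exception>() {
          @Override
          public boolean evaluate() throws Exception {
            // The tables in this run each have exactly one region.
            return util.getAdmin().getRegions(table).size() == 1;
          }
        });
      }
    }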
2023-07-16 18:15:39,633 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/GrouptestMultiTableMoveB/57006fc6fe079fd17e6390e8ec3e5988 2023-07-16 18:15:39,633 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-16 18:15:39,648 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/GrouptestMultiTableMoveB/.tabledesc/.tableinfo.0000000001 2023-07-16 18:15:39,650 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(7675): creating {ENCODED => 57006fc6fe079fd17e6390e8ec3e5988, NAME => 'GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='GrouptestMultiTableMoveB', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp 2023-07-16 18:15:39,665 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:39,665 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1604): Closing 57006fc6fe079fd17e6390e8ec3e5988, disabling compactions & flushes 2023-07-16 18:15:39,666 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988. 2023-07-16 18:15:39,666 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988. 2023-07-16 18:15:39,666 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988. after waiting 0 ms 2023-07-16 18:15:39,666 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988. 2023-07-16 18:15:39,666 INFO [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988. 
2023-07-16 18:15:39,666 DEBUG [RegionOpenAndInit-GrouptestMultiTableMoveB-pool-0] regionserver.HRegion(1558): Region close journal for 57006fc6fe079fd17e6390e8ec3e5988: 2023-07-16 18:15:39,668 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 18:15:39,670 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689531339669"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531339669"}]},"ts":"1689531339669"} 2023-07-16 18:15:39,671 INFO [PEWorker-1] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 18:15:39,672 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 18:15:39,672 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531339672"}]},"ts":"1689531339672"} 2023-07-16 18:15:39,673 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLING in hbase:meta 2023-07-16 18:15:39,678 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 18:15:39,678 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 18:15:39,678 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 18:15:39,678 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 18:15:39,678 DEBUG [PEWorker-1] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 18:15:39,678 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=57006fc6fe079fd17e6390e8ec3e5988, ASSIGN}] 2023-07-16 18:15:39,681 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=57006fc6fe079fd17e6390e8ec3e5988, ASSIGN 2023-07-16 18:15:39,682 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=98, ppid=97, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=57006fc6fe079fd17e6390e8ec3e5988, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,41927,1689531323590; forceNewPlan=false, retain=false 2023-07-16 18:15:39,825 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-16 18:15:39,832 INFO [jenkins-hbase4:45445] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-16 18:15:39,834 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=57006fc6fe079fd17e6390e8ec3e5988, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:39,834 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689531339834"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531339834"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531339834"}]},"ts":"1689531339834"} 2023-07-16 18:15:39,836 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=99, ppid=98, state=RUNNABLE; OpenRegionProcedure 57006fc6fe079fd17e6390e8ec3e5988, server=jenkins-hbase4.apache.org,41927,1689531323590}] 2023-07-16 18:15:39,992 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988. 2023-07-16 18:15:39,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 57006fc6fe079fd17e6390e8ec3e5988, NAME => 'GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:39,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 57006fc6fe079fd17e6390e8ec3e5988 2023-07-16 18:15:39,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:39,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 57006fc6fe079fd17e6390e8ec3e5988 2023-07-16 18:15:39,992 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 57006fc6fe079fd17e6390e8ec3e5988 2023-07-16 18:15:39,994 INFO [StoreOpener-57006fc6fe079fd17e6390e8ec3e5988-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 57006fc6fe079fd17e6390e8ec3e5988 2023-07-16 18:15:39,995 DEBUG [StoreOpener-57006fc6fe079fd17e6390e8ec3e5988-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/GrouptestMultiTableMoveB/57006fc6fe079fd17e6390e8ec3e5988/f 2023-07-16 18:15:39,995 DEBUG [StoreOpener-57006fc6fe079fd17e6390e8ec3e5988-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/GrouptestMultiTableMoveB/57006fc6fe079fd17e6390e8ec3e5988/f 2023-07-16 18:15:39,996 INFO [StoreOpener-57006fc6fe079fd17e6390e8ec3e5988-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major 
jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 57006fc6fe079fd17e6390e8ec3e5988 columnFamilyName f 2023-07-16 18:15:39,996 INFO [StoreOpener-57006fc6fe079fd17e6390e8ec3e5988-1] regionserver.HStore(310): Store=57006fc6fe079fd17e6390e8ec3e5988/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:39,997 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/GrouptestMultiTableMoveB/57006fc6fe079fd17e6390e8ec3e5988 2023-07-16 18:15:39,998 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/GrouptestMultiTableMoveB/57006fc6fe079fd17e6390e8ec3e5988 2023-07-16 18:15:40,001 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 57006fc6fe079fd17e6390e8ec3e5988 2023-07-16 18:15:40,003 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/GrouptestMultiTableMoveB/57006fc6fe079fd17e6390e8ec3e5988/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:40,004 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 57006fc6fe079fd17e6390e8ec3e5988; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10900214080, jitterRate=0.01516154408454895}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:40,004 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 57006fc6fe079fd17e6390e8ec3e5988: 2023-07-16 18:15:40,006 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988., pid=99, masterSystemTime=1689531339988 2023-07-16 18:15:40,008 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988. 2023-07-16 18:15:40,008 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988. 
2023-07-16 18:15:40,008 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=98 updating hbase:meta row=57006fc6fe079fd17e6390e8ec3e5988, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:40,008 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689531340008"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531340008"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531340008"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531340008"}]},"ts":"1689531340008"} 2023-07-16 18:15:40,012 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=99, resume processing ppid=98 2023-07-16 18:15:40,012 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=99, ppid=98, state=SUCCESS; OpenRegionProcedure 57006fc6fe079fd17e6390e8ec3e5988, server=jenkins-hbase4.apache.org,41927,1689531323590 in 174 msec 2023-07-16 18:15:40,014 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=98, resume processing ppid=97 2023-07-16 18:15:40,014 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=98, ppid=97, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=57006fc6fe079fd17e6390e8ec3e5988, ASSIGN in 334 msec 2023-07-16 18:15:40,014 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 18:15:40,014 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531340014"}]},"ts":"1689531340014"} 2023-07-16 18:15:40,016 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=ENABLED in hbase:meta 2023-07-16 18:15:40,018 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=97, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=GrouptestMultiTableMoveB execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 18:15:40,020 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=97, state=SUCCESS; CreateTableProcedure table=GrouptestMultiTableMoveB in 585 msec 2023-07-16 18:15:40,126 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=97 2023-07-16 18:15:40,127 INFO [Listener at localhost/38073] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:GrouptestMultiTableMoveB, procId: 97 completed 2023-07-16 18:15:40,127 DEBUG [Listener at localhost/38073] hbase.HBaseTestingUtility(3430): Waiting until all regions of table GrouptestMultiTableMoveB get assigned. Timeout = 60000ms 2023-07-16 18:15:40,127 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:40,133 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(3484): All regions for table GrouptestMultiTableMoveB assigned to meta. Checking AM states. 
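The entries that follow are the core of testMultiTableMove: the client asks for each table's current group (GetRSGroupInfoOfTable), then moves [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] into Group_testMultiTableMove_704176438, which triggers REOPEN/MOVE procedures (pid=100/101) that close both regions and reopen them on the group's only server, jenkins-hbase4.apache.org:33809 (moved into the group earlier in this excerpt). A condensed sketch of the client-side calls, assuming the RSGroupAdminClient API named in the stack trace earlier; group name, server, and table names are copied from the log, while the class and connection handling are illustrative:

    import java.util.Collections;
    import java.util.HashSet;
    import java.util.Set;

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    // Hypothetical condensation of the rsgroup calls this test issues.
    public class MultiTableMoveSketch {
      static void moveBothTables(Connection connection) throws Exception {
        RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(connection);
        String group = "Group_testMultiTableMove_704176438";

        // "add rsgroup Group_testMultiTableMove_704176438" (earlier in this excerpt)
        rsGroupAdmin.addRSGroup(group);
        // "move servers [jenkins-hbase4.apache.org:33809] to rsgroup ..."
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromString("jenkins-hbase4.apache.org:33809")), group);

        // "move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup ..."
        Set<TableName> tables = new HashSet<>();
        tables.add(TableName.valueOf("GrouptestMultiTableMoveA"));
        tables.add(TableName.valueOf("GrouptestMultiTableMoveB"));
        rsGroupAdmin.moveTables(tables, group);

        // "initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB"
        RSGroupInfo info =
            rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("GrouptestMultiTableMoveB"));
        if (!group.equals(info.getName())) {
          throw new IllegalStateException("table not moved to " + group);
        }
      }
    }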
2023-07-16 18:15:40,133 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:40,133 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(3504): All regions for table GrouptestMultiTableMoveB assigned. 2023-07-16 18:15:40,134 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:40,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-16 18:15:40,157 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 18:15:40,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-16 18:15:40,158 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 18:15:40,159 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsAdmin1(262): Moving table [GrouptestMultiTableMoveA,GrouptestMultiTableMoveB] to Group_testMultiTableMove_704176438 2023-07-16 18:15:40,162 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] to rsgroup Group_testMultiTableMove_704176438 2023-07-16 18:15:40,165 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:40,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_704176438 2023-07-16 18:15:40,166 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:40,167 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:40,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveB to RSGroup Group_testMultiTableMove_704176438 2023-07-16 18:15:40,169 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(345): Moving region 57006fc6fe079fd17e6390e8ec3e5988 to RSGroup Group_testMultiTableMove_704176438 2023-07-16 18:15:40,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=57006fc6fe079fd17e6390e8ec3e5988, REOPEN/MOVE 2023-07-16 18:15:40,171 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(339): Moving region(s) for table GrouptestMultiTableMoveA to RSGroup Group_testMultiTableMove_704176438 2023-07-16 18:15:40,171 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(345): Moving region 1f2bfaf1a29ac65f21f1a8da128fcc6f to RSGroup Group_testMultiTableMove_704176438 2023-07-16 18:15:40,173 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=57006fc6fe079fd17e6390e8ec3e5988, REOPEN/MOVE 2023-07-16 18:15:40,179 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1f2bfaf1a29ac65f21f1a8da128fcc6f, REOPEN/MOVE 2023-07-16 18:15:40,179 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=57006fc6fe079fd17e6390e8ec3e5988, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:40,180 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1f2bfaf1a29ac65f21f1a8da128fcc6f, REOPEN/MOVE 2023-07-16 18:15:40,179 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(286): Moving 2 region(s) to group Group_testMultiTableMove_704176438, current retry=0 2023-07-16 18:15:40,181 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689531340179"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531340179"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531340179"}]},"ts":"1689531340179"} 2023-07-16 18:15:40,182 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=1f2bfaf1a29ac65f21f1a8da128fcc6f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:40,182 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689531340182"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531340182"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531340182"}]},"ts":"1689531340182"} 2023-07-16 18:15:40,183 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=102, ppid=100, state=RUNNABLE; CloseRegionProcedure 57006fc6fe079fd17e6390e8ec3e5988, server=jenkins-hbase4.apache.org,41927,1689531323590}] 2023-07-16 18:15:40,184 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=103, ppid=101, state=RUNNABLE; CloseRegionProcedure 1f2bfaf1a29ac65f21f1a8da128fcc6f, server=jenkins-hbase4.apache.org,44563,1689531327107}] 2023-07-16 18:15:40,337 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 57006fc6fe079fd17e6390e8ec3e5988 2023-07-16 18:15:40,338 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 57006fc6fe079fd17e6390e8ec3e5988, disabling compactions & flushes 2023-07-16 18:15:40,339 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region 
GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988. 2023-07-16 18:15:40,339 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988. 2023-07-16 18:15:40,339 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988. after waiting 0 ms 2023-07-16 18:15:40,339 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988. 2023-07-16 18:15:40,339 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1f2bfaf1a29ac65f21f1a8da128fcc6f 2023-07-16 18:15:40,340 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1f2bfaf1a29ac65f21f1a8da128fcc6f, disabling compactions & flushes 2023-07-16 18:15:40,340 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f. 2023-07-16 18:15:40,340 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f. 2023-07-16 18:15:40,341 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f. after waiting 0 ms 2023-07-16 18:15:40,341 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f. 2023-07-16 18:15:40,347 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/GrouptestMultiTableMoveB/57006fc6fe079fd17e6390e8ec3e5988/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 18:15:40,351 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/GrouptestMultiTableMoveA/1f2bfaf1a29ac65f21f1a8da128fcc6f/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 18:15:40,351 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f. 2023-07-16 18:15:40,351 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1f2bfaf1a29ac65f21f1a8da128fcc6f: 2023-07-16 18:15:40,351 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 1f2bfaf1a29ac65f21f1a8da128fcc6f move to jenkins-hbase4.apache.org,33809,1689531323219 record at close sequenceid=2 2023-07-16 18:15:40,353 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988. 
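The region closes above (pids 100-105) are all driven by the single MoveTables request logged at 18:15:40,162. A hedged sketch of that client call, assuming the RSGroupAdminClient helper shipped with the hbase-rsgroup module (class name and signatures are an assumption and may differ between releases); conn stands for an open Connection:

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

static void moveTablesToGroup(Connection conn, String targetGroup) throws IOException {
  Set<TableName> tables = new HashSet<>(Arrays.asList(
      TableName.valueOf("GrouptestMultiTableMoveA"),
      TableName.valueOf("GrouptestMultiTableMoveB")));
  // One MoveTables call: the master rewrites the rsgroup znodes and then issues a
  // REOPEN/MOVE TransitRegionStateProcedure per region, as seen in the log.
  new RSGroupAdminClient(conn).moveTables(tables, targetGroup);
}
```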
2023-07-16 18:15:40,353 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 57006fc6fe079fd17e6390e8ec3e5988: 2023-07-16 18:15:40,353 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 57006fc6fe079fd17e6390e8ec3e5988 move to jenkins-hbase4.apache.org,33809,1689531323219 record at close sequenceid=2 2023-07-16 18:15:40,354 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=1f2bfaf1a29ac65f21f1a8da128fcc6f, regionState=CLOSED 2023-07-16 18:15:40,354 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689531340354"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531340354"}]},"ts":"1689531340354"} 2023-07-16 18:15:40,355 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1f2bfaf1a29ac65f21f1a8da128fcc6f 2023-07-16 18:15:40,356 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 57006fc6fe079fd17e6390e8ec3e5988 2023-07-16 18:15:40,356 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=57006fc6fe079fd17e6390e8ec3e5988, regionState=CLOSED 2023-07-16 18:15:40,357 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689531340356"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531340356"}]},"ts":"1689531340356"} 2023-07-16 18:15:40,361 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=103, resume processing ppid=101 2023-07-16 18:15:40,361 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=102, resume processing ppid=100 2023-07-16 18:15:40,361 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=103, ppid=101, state=SUCCESS; CloseRegionProcedure 1f2bfaf1a29ac65f21f1a8da128fcc6f, server=jenkins-hbase4.apache.org,44563,1689531327107 in 173 msec 2023-07-16 18:15:40,361 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=102, ppid=100, state=SUCCESS; CloseRegionProcedure 57006fc6fe079fd17e6390e8ec3e5988, server=jenkins-hbase4.apache.org,41927,1689531323590 in 175 msec 2023-07-16 18:15:40,362 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=101, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1f2bfaf1a29ac65f21f1a8da128fcc6f, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33809,1689531323219; forceNewPlan=false, retain=false 2023-07-16 18:15:40,362 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=100, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=57006fc6fe079fd17e6390e8ec3e5988, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33809,1689531323219; forceNewPlan=false, retain=false 2023-07-16 18:15:40,512 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=57006fc6fe079fd17e6390e8ec3e5988, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 
18:15:40,512 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=1f2bfaf1a29ac65f21f1a8da128fcc6f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:40,513 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689531340512"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531340512"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531340512"}]},"ts":"1689531340512"} 2023-07-16 18:15:40,513 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689531340512"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531340512"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531340512"}]},"ts":"1689531340512"} 2023-07-16 18:15:40,514 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=104, ppid=101, state=RUNNABLE; OpenRegionProcedure 1f2bfaf1a29ac65f21f1a8da128fcc6f, server=jenkins-hbase4.apache.org,33809,1689531323219}] 2023-07-16 18:15:40,515 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=105, ppid=100, state=RUNNABLE; OpenRegionProcedure 57006fc6fe079fd17e6390e8ec3e5988, server=jenkins-hbase4.apache.org,33809,1689531323219}] 2023-07-16 18:15:40,670 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f. 2023-07-16 18:15:40,670 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1f2bfaf1a29ac65f21f1a8da128fcc6f, NAME => 'GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:40,671 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveA 1f2bfaf1a29ac65f21f1a8da128fcc6f 2023-07-16 18:15:40,671 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:40,671 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1f2bfaf1a29ac65f21f1a8da128fcc6f 2023-07-16 18:15:40,671 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1f2bfaf1a29ac65f21f1a8da128fcc6f 2023-07-16 18:15:40,672 INFO [StoreOpener-1f2bfaf1a29ac65f21f1a8da128fcc6f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 1f2bfaf1a29ac65f21f1a8da128fcc6f 2023-07-16 18:15:40,673 DEBUG [StoreOpener-1f2bfaf1a29ac65f21f1a8da128fcc6f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for 
path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/GrouptestMultiTableMoveA/1f2bfaf1a29ac65f21f1a8da128fcc6f/f 2023-07-16 18:15:40,673 DEBUG [StoreOpener-1f2bfaf1a29ac65f21f1a8da128fcc6f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/GrouptestMultiTableMoveA/1f2bfaf1a29ac65f21f1a8da128fcc6f/f 2023-07-16 18:15:40,674 INFO [StoreOpener-1f2bfaf1a29ac65f21f1a8da128fcc6f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1f2bfaf1a29ac65f21f1a8da128fcc6f columnFamilyName f 2023-07-16 18:15:40,674 INFO [StoreOpener-1f2bfaf1a29ac65f21f1a8da128fcc6f-1] regionserver.HStore(310): Store=1f2bfaf1a29ac65f21f1a8da128fcc6f/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:40,675 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/GrouptestMultiTableMoveA/1f2bfaf1a29ac65f21f1a8da128fcc6f 2023-07-16 18:15:40,677 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/GrouptestMultiTableMoveA/1f2bfaf1a29ac65f21f1a8da128fcc6f 2023-07-16 18:15:40,680 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1f2bfaf1a29ac65f21f1a8da128fcc6f 2023-07-16 18:15:40,681 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1f2bfaf1a29ac65f21f1a8da128fcc6f; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9925023040, jitterRate=-0.07566019892692566}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:40,681 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1f2bfaf1a29ac65f21f1a8da128fcc6f: 2023-07-16 18:15:40,682 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f., pid=104, masterSystemTime=1689531340666 2023-07-16 18:15:40,683 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f. 2023-07-16 18:15:40,683 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f. 
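Once GrouptestMultiTableMoveA reopens on jenkins-hbase4.apache.org,33809, a caller can confirm that the region now sits on a server belonging to the target group. A sketch under the same RSGroupAdminClient assumption; the helper name regionHostedByGroup is illustrative:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

static boolean regionHostedByGroup(Connection conn, TableName table, String group)
    throws IOException {
  RSGroupInfo info = new RSGroupAdminClient(conn).getRSGroupInfo(group);
  try (RegionLocator locator = conn.getRegionLocator(table)) {
    // reload=true forces a fresh meta lookup so the post-move location is seen.
    ServerName host = locator.getRegionLocation(new byte[0], true).getServerName();
    return info != null
        && info.getServers().contains(Address.fromParts(host.getHostname(), host.getPort()));
  }
}
```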
2023-07-16 18:15:40,683 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988. 2023-07-16 18:15:40,684 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 57006fc6fe079fd17e6390e8ec3e5988, NAME => 'GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:40,684 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=101 updating hbase:meta row=1f2bfaf1a29ac65f21f1a8da128fcc6f, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:40,684 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table GrouptestMultiTableMoveB 57006fc6fe079fd17e6390e8ec3e5988 2023-07-16 18:15:40,684 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:40,684 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689531340684"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531340684"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531340684"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531340684"}]},"ts":"1689531340684"} 2023-07-16 18:15:40,684 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 57006fc6fe079fd17e6390e8ec3e5988 2023-07-16 18:15:40,684 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 57006fc6fe079fd17e6390e8ec3e5988 2023-07-16 18:15:40,686 INFO [StoreOpener-57006fc6fe079fd17e6390e8ec3e5988-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 57006fc6fe079fd17e6390e8ec3e5988 2023-07-16 18:15:40,687 DEBUG [StoreOpener-57006fc6fe079fd17e6390e8ec3e5988-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/GrouptestMultiTableMoveB/57006fc6fe079fd17e6390e8ec3e5988/f 2023-07-16 18:15:40,687 DEBUG [StoreOpener-57006fc6fe079fd17e6390e8ec3e5988-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/GrouptestMultiTableMoveB/57006fc6fe079fd17e6390e8ec3e5988/f 2023-07-16 18:15:40,687 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=104, resume processing ppid=101 2023-07-16 18:15:40,687 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=104, ppid=101, state=SUCCESS; OpenRegionProcedure 1f2bfaf1a29ac65f21f1a8da128fcc6f, server=jenkins-hbase4.apache.org,33809,1689531323219 in 172 msec 2023-07-16 18:15:40,688 INFO [StoreOpener-57006fc6fe079fd17e6390e8ec3e5988-1] compactions.CompactionConfiguration(173): 
size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 57006fc6fe079fd17e6390e8ec3e5988 columnFamilyName f 2023-07-16 18:15:40,688 INFO [StoreOpener-57006fc6fe079fd17e6390e8ec3e5988-1] regionserver.HStore(310): Store=57006fc6fe079fd17e6390e8ec3e5988/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:40,689 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=101, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1f2bfaf1a29ac65f21f1a8da128fcc6f, REOPEN/MOVE in 516 msec 2023-07-16 18:15:40,690 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/GrouptestMultiTableMoveB/57006fc6fe079fd17e6390e8ec3e5988 2023-07-16 18:15:40,691 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/GrouptestMultiTableMoveB/57006fc6fe079fd17e6390e8ec3e5988 2023-07-16 18:15:40,694 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 57006fc6fe079fd17e6390e8ec3e5988 2023-07-16 18:15:40,695 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 57006fc6fe079fd17e6390e8ec3e5988; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11156539360, jitterRate=0.03903369605541229}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:40,695 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 57006fc6fe079fd17e6390e8ec3e5988: 2023-07-16 18:15:40,696 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988., pid=105, masterSystemTime=1689531340666 2023-07-16 18:15:40,699 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988. 2023-07-16 18:15:40,699 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988. 
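With both regions reopened on the target server, the lookups logged shortly after (RSGroupAdminService.GetRSGroupInfoOfTable for both tables) verify group membership. A hedged sketch of an equivalent check; assertTablesInGroup is an illustrative name and RSGroupAdminClient is assumed as above:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

static void assertTablesInGroup(Connection conn, String expectedGroup, TableName... tables)
    throws IOException {
  RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
  for (TableName table : tables) {
    // Each call appears in the master log as a GetRSGroupInfoOfTable service request.
    RSGroupInfo info = rsGroupAdmin.getRSGroupInfoOfTable(table);
    if (info == null || !expectedGroup.equals(info.getName())) {
      throw new AssertionError(table + " is not in rsgroup " + expectedGroup);
    }
  }
}
```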
2023-07-16 18:15:40,700 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=100 updating hbase:meta row=57006fc6fe079fd17e6390e8ec3e5988, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:40,700 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689531340700"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531340700"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531340700"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531340700"}]},"ts":"1689531340700"} 2023-07-16 18:15:40,704 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=105, resume processing ppid=100 2023-07-16 18:15:40,704 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=105, ppid=100, state=SUCCESS; OpenRegionProcedure 57006fc6fe079fd17e6390e8ec3e5988, server=jenkins-hbase4.apache.org,33809,1689531323219 in 187 msec 2023-07-16 18:15:40,705 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=100, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=57006fc6fe079fd17e6390e8ec3e5988, REOPEN/MOVE in 535 msec 2023-07-16 18:15:41,181 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure.ProcedureSyncWait(216): waitFor pid=100 2023-07-16 18:15:41,181 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(369): All regions from table(s) [GrouptestMultiTableMoveB, GrouptestMultiTableMoveA] moved to target group Group_testMultiTableMove_704176438. 2023-07-16 18:15:41,181 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:41,185 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:41,185 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:41,187 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveA 2023-07-16 18:15:41,187 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 18:15:41,188 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestMultiTableMoveB 2023-07-16 18:15:41,188 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 18:15:41,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:15:41,189 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:41,190 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testMultiTableMove_704176438 2023-07-16 18:15:41,190 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:41,192 INFO [Listener at localhost/38073] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveA 2023-07-16 18:15:41,192 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveA 2023-07-16 18:15:41,193 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=106, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 18:15:41,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-16 18:15:41,196 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531341196"}]},"ts":"1689531341196"} 2023-07-16 18:15:41,198 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLING in hbase:meta 2023-07-16 18:15:41,199 INFO [PEWorker-5] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveA to state=DISABLING 2023-07-16 18:15:41,203 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1f2bfaf1a29ac65f21f1a8da128fcc6f, UNASSIGN}] 2023-07-16 18:15:41,205 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=107, ppid=106, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1f2bfaf1a29ac65f21f1a8da128fcc6f, UNASSIGN 2023-07-16 18:15:41,206 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=1f2bfaf1a29ac65f21f1a8da128fcc6f, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:41,206 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689531341206"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531341206"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531341206"}]},"ts":"1689531341206"} 2023-07-16 18:15:41,208 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=108, ppid=107, state=RUNNABLE; CloseRegionProcedure 1f2bfaf1a29ac65f21f1a8da128fcc6f, 
server=jenkins-hbase4.apache.org,33809,1689531323219}] 2023-07-16 18:15:41,297 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-16 18:15:41,342 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-16 18:15:41,360 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 1f2bfaf1a29ac65f21f1a8da128fcc6f 2023-07-16 18:15:41,361 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1f2bfaf1a29ac65f21f1a8da128fcc6f, disabling compactions & flushes 2023-07-16 18:15:41,361 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f. 2023-07-16 18:15:41,361 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f. 2023-07-16 18:15:41,362 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f. after waiting 0 ms 2023-07-16 18:15:41,362 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f. 2023-07-16 18:15:41,367 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/GrouptestMultiTableMoveA/1f2bfaf1a29ac65f21f1a8da128fcc6f/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 18:15:41,368 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f. 
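The DisableTableProcedure pid=106 above is submitted by the client and then polled until done ("Checking to see if procedure is done pid=106"). A sketch of the asynchronous form of that call on a standard HBase 2.x Admin; disableAndWait is an illustrative name:

```java
import java.io.IOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

static void disableAndWait(Admin admin, TableName table)
    throws IOException, InterruptedException, ExecutionException, TimeoutException {
  // disableTableAsync() starts the DisableTableProcedure and returns a future;
  // get() polls the master until the procedure reports completion.
  Future<Void> f = admin.disableTableAsync(table);
  f.get(60, TimeUnit.SECONDS);
}
```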
2023-07-16 18:15:41,368 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1f2bfaf1a29ac65f21f1a8da128fcc6f: 2023-07-16 18:15:41,370 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 1f2bfaf1a29ac65f21f1a8da128fcc6f 2023-07-16 18:15:41,370 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=107 updating hbase:meta row=1f2bfaf1a29ac65f21f1a8da128fcc6f, regionState=CLOSED 2023-07-16 18:15:41,370 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689531341370"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531341370"}]},"ts":"1689531341370"} 2023-07-16 18:15:41,373 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=108, resume processing ppid=107 2023-07-16 18:15:41,373 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=108, ppid=107, state=SUCCESS; CloseRegionProcedure 1f2bfaf1a29ac65f21f1a8da128fcc6f, server=jenkins-hbase4.apache.org,33809,1689531323219 in 164 msec 2023-07-16 18:15:41,375 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=107, resume processing ppid=106 2023-07-16 18:15:41,375 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=107, ppid=106, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveA, region=1f2bfaf1a29ac65f21f1a8da128fcc6f, UNASSIGN in 173 msec 2023-07-16 18:15:41,375 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531341375"}]},"ts":"1689531341375"} 2023-07-16 18:15:41,377 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveA, state=DISABLED in hbase:meta 2023-07-16 18:15:41,379 INFO [PEWorker-5] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveA to state=DISABLED 2023-07-16 18:15:41,381 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=106, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveA in 187 msec 2023-07-16 18:15:41,498 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=106 2023-07-16 18:15:41,499 INFO [Listener at localhost/38073] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveA, procId: 106 completed 2023-07-16 18:15:41,499 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveA 2023-07-16 18:15:41,500 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 18:15:41,502 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=109, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 18:15:41,502 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveA' from rsgroup 'Group_testMultiTableMove_704176438' 2023-07-16 18:15:41,503 DEBUG [PEWorker-2] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=109, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 18:15:41,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:41,505 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_704176438 2023-07-16 18:15:41,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:41,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:41,507 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/GrouptestMultiTableMoveA/1f2bfaf1a29ac65f21f1a8da128fcc6f 2023-07-16 18:15:41,509 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-16 18:15:41,509 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/GrouptestMultiTableMoveA/1f2bfaf1a29ac65f21f1a8da128fcc6f/f, FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/GrouptestMultiTableMoveA/1f2bfaf1a29ac65f21f1a8da128fcc6f/recovered.edits] 2023-07-16 18:15:41,515 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/GrouptestMultiTableMoveA/1f2bfaf1a29ac65f21f1a8da128fcc6f/recovered.edits/7.seqid to hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/archive/data/default/GrouptestMultiTableMoveA/1f2bfaf1a29ac65f21f1a8da128fcc6f/recovered.edits/7.seqid 2023-07-16 18:15:41,515 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/GrouptestMultiTableMoveA/1f2bfaf1a29ac65f21f1a8da128fcc6f 2023-07-16 18:15:41,515 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveA regions 2023-07-16 18:15:41,518 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=109, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 18:15:41,520 WARN [PEWorker-2] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveA from hbase:meta 2023-07-16 18:15:41,521 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveA' descriptor. 2023-07-16 18:15:41,522 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=109, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 18:15:41,522 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveA' from region states. 
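Deleting the now-disabled table archives the region directories first (the HFileArchiver lines above) and only then removes the region and table-state rows from hbase:meta. A sketch of the client side, again on a plain Admin handle; deleteAndVerify is illustrative:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

static void deleteAndVerify(Admin admin, TableName table) throws IOException {
  // The table must already be disabled (pid=106 above) or deleteTable() will fail.
  admin.deleteTable(table);
  if (admin.tableExists(table)) {
    throw new IllegalStateException(table + " still present after delete");
  }
}
```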
2023-07-16 18:15:41,522 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689531341522"}]},"ts":"9223372036854775807"} 2023-07-16 18:15:41,524 INFO [PEWorker-2] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-16 18:15:41,524 DEBUG [PEWorker-2] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 1f2bfaf1a29ac65f21f1a8da128fcc6f, NAME => 'GrouptestMultiTableMoveA,,1689531338809.1f2bfaf1a29ac65f21f1a8da128fcc6f.', STARTKEY => '', ENDKEY => ''}] 2023-07-16 18:15:41,524 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveA' as deleted. 2023-07-16 18:15:41,524 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveA","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689531341524"}]},"ts":"9223372036854775807"} 2023-07-16 18:15:41,525 INFO [PEWorker-2] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveA state from META 2023-07-16 18:15:41,527 DEBUG [PEWorker-2] procedure.DeleteTableProcedure(130): Finished pid=109, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveA 2023-07-16 18:15:41,528 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=109, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveA in 27 msec 2023-07-16 18:15:41,610 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=109 2023-07-16 18:15:41,610 INFO [Listener at localhost/38073] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveA, procId: 109 completed 2023-07-16 18:15:41,611 INFO [Listener at localhost/38073] client.HBaseAdmin$15(890): Started disable of GrouptestMultiTableMoveB 2023-07-16 18:15:41,611 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable GrouptestMultiTableMoveB 2023-07-16 18:15:41,612 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=110, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 18:15:41,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-16 18:15:41,615 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531341615"}]},"ts":"1689531341615"} 2023-07-16 18:15:41,617 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLING in hbase:meta 2023-07-16 18:15:41,619 INFO [PEWorker-1] procedure.DisableTableProcedure(293): Set GrouptestMultiTableMoveB to state=DISABLING 2023-07-16 18:15:41,620 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=57006fc6fe079fd17e6390e8ec3e5988, UNASSIGN}] 2023-07-16 18:15:41,622 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=111, ppid=110, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; 
TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=57006fc6fe079fd17e6390e8ec3e5988, UNASSIGN 2023-07-16 18:15:41,622 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=57006fc6fe079fd17e6390e8ec3e5988, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:41,622 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689531341622"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531341622"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531341622"}]},"ts":"1689531341622"} 2023-07-16 18:15:41,624 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=112, ppid=111, state=RUNNABLE; CloseRegionProcedure 57006fc6fe079fd17e6390e8ec3e5988, server=jenkins-hbase4.apache.org,33809,1689531323219}] 2023-07-16 18:15:41,716 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-16 18:15:41,777 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 57006fc6fe079fd17e6390e8ec3e5988 2023-07-16 18:15:41,778 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 57006fc6fe079fd17e6390e8ec3e5988, disabling compactions & flushes 2023-07-16 18:15:41,778 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988. 2023-07-16 18:15:41,778 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988. 2023-07-16 18:15:41,778 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988. after waiting 0 ms 2023-07-16 18:15:41,778 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988. 2023-07-16 18:15:41,783 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/GrouptestMultiTableMoveB/57006fc6fe079fd17e6390e8ec3e5988/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 18:15:41,785 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988. 
2023-07-16 18:15:41,785 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 57006fc6fe079fd17e6390e8ec3e5988: 2023-07-16 18:15:41,786 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 57006fc6fe079fd17e6390e8ec3e5988 2023-07-16 18:15:41,787 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=111 updating hbase:meta row=57006fc6fe079fd17e6390e8ec3e5988, regionState=CLOSED 2023-07-16 18:15:41,787 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988.","families":{"info":[{"qualifier":"regioninfo","vlen":58,"tag":[],"timestamp":"1689531341787"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531341787"}]},"ts":"1689531341787"} 2023-07-16 18:15:41,790 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=112, resume processing ppid=111 2023-07-16 18:15:41,790 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=112, ppid=111, state=SUCCESS; CloseRegionProcedure 57006fc6fe079fd17e6390e8ec3e5988, server=jenkins-hbase4.apache.org,33809,1689531323219 in 165 msec 2023-07-16 18:15:41,792 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=111, resume processing ppid=110 2023-07-16 18:15:41,792 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=111, ppid=110, state=SUCCESS; TransitRegionStateProcedure table=GrouptestMultiTableMoveB, region=57006fc6fe079fd17e6390e8ec3e5988, UNASSIGN in 170 msec 2023-07-16 18:15:41,793 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531341792"}]},"ts":"1689531341792"} 2023-07-16 18:15:41,794 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=GrouptestMultiTableMoveB, state=DISABLED in hbase:meta 2023-07-16 18:15:41,795 INFO [PEWorker-1] procedure.DisableTableProcedure(305): Set GrouptestMultiTableMoveB to state=DISABLED 2023-07-16 18:15:41,798 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=110, state=SUCCESS; DisableTableProcedure table=GrouptestMultiTableMoveB in 185 msec 2023-07-16 18:15:41,917 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=110 2023-07-16 18:15:41,918 INFO [Listener at localhost/38073] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:GrouptestMultiTableMoveB, procId: 110 completed 2023-07-16 18:15:41,919 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete GrouptestMultiTableMoveB 2023-07-16 18:15:41,920 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 18:15:41,921 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=113, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 18:15:41,922 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'GrouptestMultiTableMoveB' from rsgroup 'Group_testMultiTableMove_704176438' 2023-07-16 18:15:41,922 DEBUG [PEWorker-3] 
procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=113, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 18:15:41,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:41,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_704176438 2023-07-16 18:15:41,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:41,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:41,927 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/GrouptestMultiTableMoveB/57006fc6fe079fd17e6390e8ec3e5988 2023-07-16 18:15:41,929 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/GrouptestMultiTableMoveB/57006fc6fe079fd17e6390e8ec3e5988/f, FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/GrouptestMultiTableMoveB/57006fc6fe079fd17e6390e8ec3e5988/recovered.edits] 2023-07-16 18:15:41,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-16 18:15:41,935 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/GrouptestMultiTableMoveB/57006fc6fe079fd17e6390e8ec3e5988/recovered.edits/7.seqid to hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/archive/data/default/GrouptestMultiTableMoveB/57006fc6fe079fd17e6390e8ec3e5988/recovered.edits/7.seqid 2023-07-16 18:15:41,935 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/GrouptestMultiTableMoveB/57006fc6fe079fd17e6390e8ec3e5988 2023-07-16 18:15:41,935 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived GrouptestMultiTableMoveB regions 2023-07-16 18:15:41,938 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=113, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 18:15:41,940 WARN [PEWorker-3] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of GrouptestMultiTableMoveB from hbase:meta 2023-07-16 18:15:41,942 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(421): Removing 'GrouptestMultiTableMoveB' descriptor. 2023-07-16 18:15:41,943 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=113, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 18:15:41,943 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(411): Removing 'GrouptestMultiTableMoveB' from region states. 
2023-07-16 18:15:41,943 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689531341943"}]},"ts":"9223372036854775807"} 2023-07-16 18:15:41,945 INFO [PEWorker-3] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-16 18:15:41,945 DEBUG [PEWorker-3] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 57006fc6fe079fd17e6390e8ec3e5988, NAME => 'GrouptestMultiTableMoveB,,1689531339432.57006fc6fe079fd17e6390e8ec3e5988.', STARTKEY => '', ENDKEY => ''}] 2023-07-16 18:15:41,945 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(415): Marking 'GrouptestMultiTableMoveB' as deleted. 2023-07-16 18:15:41,945 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"GrouptestMultiTableMoveB","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689531341945"}]},"ts":"9223372036854775807"} 2023-07-16 18:15:41,946 INFO [PEWorker-3] hbase.MetaTableAccessor(1658): Deleted table GrouptestMultiTableMoveB state from META 2023-07-16 18:15:41,948 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(130): Finished pid=113, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=GrouptestMultiTableMoveB 2023-07-16 18:15:41,949 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=113, state=SUCCESS; DeleteTableProcedure table=GrouptestMultiTableMoveB in 29 msec 2023-07-16 18:15:42,033 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=113 2023-07-16 18:15:42,033 INFO [Listener at localhost/38073] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:GrouptestMultiTableMoveB, procId: 113 completed 2023-07-16 18:15:42,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:42,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:42,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:15:42,038 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 18:15:42,038 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:42,039 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33809] to rsgroup default 2023-07-16 18:15:42,041 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:42,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testMultiTableMove_704176438 2023-07-16 18:15:42,042 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:42,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:42,044 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testMultiTableMove_704176438, current retry=0 2023-07-16 18:15:42,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33809,1689531323219] are moved back to Group_testMultiTableMove_704176438 2023-07-16 18:15:42,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testMultiTableMove_704176438 => default 2023-07-16 18:15:42,045 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:42,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testMultiTableMove_704176438 2023-07-16 18:15:42,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:42,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:42,051 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 18:15:42,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:15:42,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:15:42,054 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
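The tail of the log is the test base restoring the cluster: the remaining server is moved back to the default group and the temporary group is removed. A hedged sketch of an equivalent cleanup, assuming the same RSGroupAdminClient helper; tearDownGroup is an illustrative name:

```java
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

static void tearDownGroup(Connection conn, String group) throws IOException {
  RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
  RSGroupInfo info = rsGroupAdmin.getRSGroupInfo(group);
  if (info == null) {
    return;  // group already gone
  }
  Set<Address> servers = new HashSet<>(info.getServers());
  if (!servers.isEmpty()) {
    // Mirrors "move servers [...] to rsgroup default" in the log.
    rsGroupAdmin.moveServers(servers, RSGroupInfo.DEFAULT_GROUP);
  }
  // Mirrors "remove rsgroup Group_testMultiTableMove_..." in the log.
  rsGroupAdmin.removeRSGroup(group);
}
```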
2023-07-16 18:15:42,054 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:42,056 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 18:15:42,056 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:42,058 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 18:15:42,065 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:42,067 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 18:15:42,070 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:15:42,077 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 18:15:42,077 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 18:15:42,080 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:42,080 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:42,082 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:15:42,083 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:15:42,086 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:42,087 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:42,088 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45445] to rsgroup master 2023-07-16 18:15:42,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:42,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] ipc.CallRunner(144): callId: 508 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45244 deadline: 1689532542088, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 2023-07-16 18:15:42,089 WARN [Listener at localhost/38073] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 18:15:42,091 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:42,091 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:42,092 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:42,092 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927, jenkins-hbase4.apache.org:43375, jenkins-hbase4.apache.org:44563], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 18:15:42,093 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:15:42,093 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:42,112 INFO [Listener at localhost/38073] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testMultiTableMove Thread=512 (was 514), OpenFileDescriptor=809 (was 825), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=401 (was 418), ProcessCount=173 (was 173), AvailableMemoryMB=2976 (was 3093) 2023-07-16 18:15:42,113 WARN [Listener at localhost/38073] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-16 18:15:42,130 INFO [Listener at localhost/38073] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=512, OpenFileDescriptor=809, MaxFileDescriptor=60000, SystemLoadAverage=401, ProcessCount=173, AvailableMemoryMB=2976 2023-07-16 18:15:42,131 WARN [Listener at localhost/38073] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-16 18:15:42,131 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase(132): testRenameRSGroupConstraints 2023-07-16 18:15:42,135 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:42,135 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:42,136 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:15:42,136 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-16 18:15:42,136 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:42,136 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 18:15:42,136 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:42,137 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 18:15:42,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:42,141 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 18:15:42,143 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:15:42,146 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 18:15:42,147 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 18:15:42,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:42,149 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:42,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:15:42,153 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:15:42,156 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:42,156 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: 
/172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:42,158 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45445] to rsgroup master 2023-07-16 18:15:42,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:42,159 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] ipc.CallRunner(144): callId: 536 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45244 deadline: 1689532542158, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 2023-07-16 18:15:42,159 WARN [Listener at localhost/38073] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 18:15:42,161 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:42,162 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:42,162 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:42,162 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927, jenkins-hbase4.apache.org:43375, jenkins-hbase4.apache.org:44563], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 18:15:42,163 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:15:42,163 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:42,165 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:15:42,166 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:42,166 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldGroup 2023-07-16 18:15:42,168 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:42,169 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 18:15:42,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:42,171 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:42,178 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:15:42,181 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:42,181 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:42,184 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927] to rsgroup oldGroup 2023-07-16 18:15:42,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:42,186 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 18:15:42,187 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:42,187 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:42,189 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-16 18:15:42,189 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33809,1689531323219, jenkins-hbase4.apache.org,41927,1689531323590] are moved back to default 2023-07-16 18:15:42,189 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldGroup 2023-07-16 18:15:42,189 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:42,193 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:42,193 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:42,195 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-16 18:15:42,195 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:42,196 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldGroup 2023-07-16 18:15:42,196 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:42,197 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:15:42,197 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:42,198 INFO 
[RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup anotherRSGroup 2023-07-16 18:15:42,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:42,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-16 18:15:42,201 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 18:15:42,202 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:42,202 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 18:15:42,204 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:15:42,208 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:42,208 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:42,213 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43375] to rsgroup anotherRSGroup 2023-07-16 18:15:42,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:42,217 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-16 18:15:42,218 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 18:15:42,218 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:42,219 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 18:15:42,221 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-16 18:15:42,221 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43375,1689531323422] are moved back to default 2023-07-16 18:15:42,221 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminServer(438): Move servers done: default => anotherRSGroup 2023-07-16 18:15:42,221 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:42,225 
INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:42,226 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:42,230 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-16 18:15:42,230 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:42,231 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=anotherRSGroup 2023-07-16 18:15:42,231 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:42,238 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from nonExistingRSGroup to newRSGroup1 2023-07-16 18:15:42,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:407) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:42,238 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] ipc.CallRunner(144): callId: 570 service: MasterService methodName: ExecMasterService size: 113 connection: 172.31.14.131:45244 deadline: 1689532542237, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup nonExistingRSGroup does not exist 2023-07-16 18:15:42,240 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to anotherRSGroup 2023-07-16 18:15:42,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: 
Group already exists: anotherRSGroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:42,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] ipc.CallRunner(144): callId: 572 service: MasterService methodName: ExecMasterService size: 106 connection: 172.31.14.131:45244 deadline: 1689532542240, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: anotherRSGroup 2023-07-16 18:15:42,242 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from default to newRSGroup2 2023-07-16 18:15:42,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup at org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:403) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:42,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] ipc.CallRunner(144): callId: 574 service: MasterService methodName: ExecMasterService size: 102 connection: 172.31.14.131:45244 deadline: 1689532542242, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Can't rename default rsgroup 2023-07-16 18:15:42,243 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldGroup to default 2023-07-16 18:15:42,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default at 
org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl.renameRSGroup(RSGroupInfoManagerImpl.java:410) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.renameRSGroup(RSGroupAdminServer.java:617) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.renameRSGroup(RSGroupAdminEndpoint.java:417) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16233) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:42,244 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] ipc.CallRunner(144): callId: 576 service: MasterService methodName: ExecMasterService size: 99 connection: 172.31.14.131:45244 deadline: 1689532542243, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Group already exists: default 2023-07-16 18:15:42,249 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:42,250 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:42,251 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:15:42,251 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
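The four rename attempts above exercise the constraints that testRenameRSGroupConstraints checks: the source group must exist, the target name must not already be taken (anotherRSGroup, default), and the default group itself cannot be renamed. A sketch of the client-side calls and the expected rejections, assuming RSGroupAdminClient exposes a renameRSGroup(oldName, newName) matching the RSGroupAdminServer.renameRSGroup seen in the traces (a branch-2.4 assumption; group names are the ones from the log):

    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class RenameRSGroupSketch {
      /** Each call below should be rejected with a ConstraintException, as logged above. */
      static void tryRenames(Connection conn) throws Exception {
        RSGroupAdminClient rsAdmin = new RSGroupAdminClient(conn); // assumed to expose renameRSGroup on branch-2.4
        expectConstraint(() -> rsAdmin.renameRSGroup("nonExistingRSGroup", "newRSGroup1")); // source must exist
        expectConstraint(() -> rsAdmin.renameRSGroup("oldGroup", "anotherRSGroup"));        // target already exists
        expectConstraint(() -> rsAdmin.renameRSGroup("default", "newRSGroup2"));            // default cannot be renamed
        expectConstraint(() -> rsAdmin.renameRSGroup("oldGroup", "default"));               // target already exists
      }

      interface Call { void run() throws Exception; }

      static void expectConstraint(Call call) throws Exception {
        try {
          call.run();
          throw new AssertionError("expected a ConstraintException");
        } catch (ConstraintException expected) {
          // the constraint rejected the rename, matching the ExecMasterService errors above
        }
      }
    }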
2023-07-16 18:15:42,251 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:42,253 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43375] to rsgroup default 2023-07-16 18:15:42,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:42,257 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/anotherRSGroup 2023-07-16 18:15:42,258 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 18:15:42,259 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:42,260 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 18:15:42,262 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group anotherRSGroup, current retry=0 2023-07-16 18:15:42,262 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43375,1689531323422] are moved back to anotherRSGroup 2023-07-16 18:15:42,262 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminServer(438): Move servers done: anotherRSGroup => default 2023-07-16 18:15:42,262 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:42,264 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup anotherRSGroup 2023-07-16 18:15:42,269 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:42,270 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 18:15:42,276 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:42,277 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-16 18:15:42,282 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:15:42,284 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:15:42,284 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminServer(448): moveTables() 
passed an empty set. Ignoring. 2023-07-16 18:15:42,284 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:42,286 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927] to rsgroup default 2023-07-16 18:15:42,289 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:42,291 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldGroup 2023-07-16 18:15:42,292 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:42,292 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:42,295 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group oldGroup, current retry=0 2023-07-16 18:15:42,295 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33809,1689531323219, jenkins-hbase4.apache.org,41927,1689531323590] are moved back to oldGroup 2023-07-16 18:15:42,295 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminServer(438): Move servers done: oldGroup => default 2023-07-16 18:15:42,295 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:42,297 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup oldGroup 2023-07-16 18:15:42,302 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:42,303 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:42,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 18:15:42,306 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:15:42,307 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:15:42,307 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
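The recurring "Got this on setup, FYI" WARN comes from TestRSGroupsBase deliberately tolerating a failure: it tries to pin the active master's host:port into a "master" rsgroup, and the move is rejected because the master's RPC address is not a registered region server. A sketch of that tolerant pattern, assuming the RSGroupAdminClient signatures used above; the helper name and host/port parameters are illustrative:

    import java.util.Collections;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class PinMasterGroupSketch {
      private static final Logger LOG = LoggerFactory.getLogger(PinMasterGroupSketch.class);

      /** Best-effort: put the master's address into the 'master' group, ignoring rejection. */
      static void tryPinMaster(Connection conn, String masterHost, int masterPort) throws Exception {
        RSGroupAdminClient rsAdmin = new RSGroupAdminClient(conn);
        rsAdmin.addRSGroup("master"); // the test removes and re-adds this group around each method
        try {
          rsAdmin.moveServers(
              Collections.singleton(Address.fromParts(masterHost, masterPort)), "master");
        } catch (ConstraintException e) {
          // The master is not a region server, so the move fails with
          // "is either offline or it does not exist", exactly as in the entries above.
          LOG.warn("Got this on setup, FYI", e);
        }
      }
    }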
2023-07-16 18:15:42,307 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:42,308 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 18:15:42,308 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:42,309 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 18:15:42,316 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:42,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 18:15:42,325 INFO [RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:15:42,330 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 18:15:42,332 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 18:15:42,335 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:42,336 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:42,338 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:15:42,342 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:15:42,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:42,346 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:42,352 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45445] to rsgroup master 2023-07-16 18:15:42,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:42,353 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.CallRunner(144): callId: 612 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45244 deadline: 1689532542352, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 2023-07-16 18:15:42,353 WARN [Listener at localhost/38073] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 18:15:42,356 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:42,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:42,358 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:42,358 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927, jenkins-hbase4.apache.org:43375, jenkins-hbase4.apache.org:44563], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 18:15:42,359 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:15:42,359 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:42,385 INFO [Listener at localhost/38073] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroupConstraints Thread=516 (was 512) Potentially hanging thread: hconnection-0x67375dfc-shared-pool-17 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x67375dfc-shared-pool-16 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x67375dfc-shared-pool-19 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x67375dfc-shared-pool-18 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=809 (was 809), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=401 (was 401), ProcessCount=173 (was 173), AvailableMemoryMB=2991 (was 2976) - AvailableMemoryMB LEAK? - 2023-07-16 18:15:42,386 WARN [Listener at localhost/38073] hbase.ResourceChecker(130): Thread=516 is superior to 500 2023-07-16 18:15:42,413 INFO [Listener at localhost/38073] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=516, OpenFileDescriptor=809, MaxFileDescriptor=60000, SystemLoadAverage=401, ProcessCount=173, AvailableMemoryMB=2991 2023-07-16 18:15:42,413 WARN [Listener at localhost/38073] hbase.ResourceChecker(130): Thread=516 is superior to 500 2023-07-16 18:15:42,413 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase(132): testRenameRSGroup 2023-07-16 18:15:42,418 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:42,418 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:42,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:15:42,419 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
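The ConstraintException in the teardown above (and again in the setup that follows) comes from asking the group manager to move the active master's RPC address, jenkins-hbase4.apache.org:45445, into the "master" group; the master is not one of the region servers the group manager tracks, so the lookup fails and the test merely logs "Got this on setup, FYI" and continues. A hedged sketch of that call pattern, with the address copied from this log rather than anything reusable:

    import java.util.Collections;
    import org.apache.hadoop.hbase.constraint.ConstraintException;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    // Attempt the same MoveServers call the setup/teardown makes; the address is
    // the master's RPC endpoint from this run, not a region server, so it fails.
    void tryMoveMasterAddress(RSGroupAdminClient rsGroupAdmin) throws Exception {
      Address master = Address.fromParts("jenkins-hbase4.apache.org", 45445);
      try {
        rsGroupAdmin.moveServers(Collections.singleton(master), "master");
      } catch (ConstraintException e) {
        // "Server ... is either offline or it does not exist." The test treats
        // this as expected noise ("Got this on setup, FYI") and keeps going.
      }
    }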
2023-07-16 18:15:42,419 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:42,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 18:15:42,420 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:42,421 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 18:15:42,424 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:42,425 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 18:15:42,426 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:15:42,429 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 18:15:42,431 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 18:15:42,433 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:42,434 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:42,436 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:15:42,438 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:15:42,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:42,441 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:42,443 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45445] to rsgroup master 2023-07-16 18:15:42,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:42,443 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.CallRunner(144): callId: 640 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45244 deadline: 1689532542443, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 2023-07-16 18:15:42,444 WARN [Listener at localhost/38073] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 18:15:42,445 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:42,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:42,446 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:42,446 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927, jenkins-hbase4.apache.org:43375, jenkins-hbase4.apache.org:44563], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 18:15:42,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:15:42,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:42,447 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:15:42,448 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:42,449 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup oldgroup 2023-07-16 18:15:42,451 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 18:15:42,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:42,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] 
rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:42,453 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:42,457 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:15:42,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:42,460 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:42,462 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927] to rsgroup oldgroup 2023-07-16 18:15:42,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 18:15:42,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:42,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:42,464 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:42,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-16 18:15:42,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33809,1689531323219, jenkins-hbase4.apache.org,41927,1689531323590] are moved back to default 2023-07-16 18:15:42,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(438): Move servers done: default => oldgroup 2023-07-16 18:15:42,467 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:42,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:42,469 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:42,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-16 18:15:42,471 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for 
RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:42,473 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 18:15:42,474 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=testRename 2023-07-16 18:15:42,476 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 18:15:42,476 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "testRename" procId is: 114 2023-07-16 18:15:42,477 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-16 18:15:42,477 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 18:15:42,478 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:42,478 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:42,479 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:42,481 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 18:15:42,483 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/testRename/169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:42,484 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/testRename/169e52c14d3c900a0913d50e0cfad311 empty. 
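The shell-style descriptor printed by HMaster for the 'testRename' create above maps onto the 2.x client builder API. A minimal sketch, assuming a plain Admin handle rather than the test's own helpers:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    // One 'tr' family, REGION_REPLICATION => 1, everything else at defaults,
    // matching the descriptor logged above.
    void createTestRename(Admin admin) throws Exception {
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("testRename"))
          .setRegionReplication(1)
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("tr"))
          .build();
      // Blocks until the CreateTableProcedure (pid=114 above) finishes; the
      // repeated "Checking to see if procedure is done pid=114" lines are the
      // client polling for exactly that completion.
      admin.createTable(desc);
    }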
2023-07-16 18:15:42,484 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/testRename/169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:42,484 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived testRename regions 2023-07-16 18:15:42,500 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/testRename/.tabledesc/.tableinfo.0000000001 2023-07-16 18:15:42,502 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(7675): creating {ENCODED => 169e52c14d3c900a0913d50e0cfad311, NAME => 'testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='testRename', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'tr', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp 2023-07-16 18:15:42,515 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(866): Instantiated testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:42,516 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1604): Closing 169e52c14d3c900a0913d50e0cfad311, disabling compactions & flushes 2023-07-16 18:15:42,516 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1626): Closing region testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 2023-07-16 18:15:42,516 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 2023-07-16 18:15:42,516 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. after waiting 0 ms 2023-07-16 18:15:42,516 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 2023-07-16 18:15:42,516 INFO [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1838): Closed testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 2023-07-16 18:15:42,516 DEBUG [RegionOpenAndInit-testRename-pool-0] regionserver.HRegion(1558): Region close journal for 169e52c14d3c900a0913d50e0cfad311: 2023-07-16 18:15:42,518 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 18:15:42,519 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689531342519"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531342519"}]},"ts":"1689531342519"} 2023-07-16 18:15:42,521 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
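After CREATE_TABLE_ADD_TO_META the single region (encoded name 169e52c14d3c900a0913d50e0cfad311) is registered in hbase:meta. A small sketch of reading it back through the client API, assuming an Admin handle:

    import java.util.List;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.RegionInfo;

    // List the table's regions as the client sees them after the meta update.
    void printTestRenameRegions(Admin admin) throws Exception {
      List<RegionInfo> regions = admin.getRegions(TableName.valueOf("testRename"));
      for (RegionInfo region : regions) {
        System.out.println(region.getEncodedName()); // e.g. the encoded name logged above
      }
    }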
2023-07-16 18:15:42,521 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 18:15:42,521 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531342521"}]},"ts":"1689531342521"} 2023-07-16 18:15:42,522 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLING in hbase:meta 2023-07-16 18:15:42,525 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 18:15:42,525 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 18:15:42,525 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 18:15:42,525 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 18:15:42,526 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=169e52c14d3c900a0913d50e0cfad311, ASSIGN}] 2023-07-16 18:15:42,527 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=testRename, region=169e52c14d3c900a0913d50e0cfad311, ASSIGN 2023-07-16 18:15:42,528 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=115, ppid=114, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=169e52c14d3c900a0913d50e0cfad311, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44563,1689531327107; forceNewPlan=false, retain=false 2023-07-16 18:15:42,578 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-16 18:15:42,678 INFO [jenkins-hbase4:45445] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
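The ASSIGN subprocedure above ends up placing the region on jenkins-hbase4.apache.org,44563 (see the OPENING/OPEN updates that follow). A hedged sketch of checking that placement from a client, assuming an open Connection:

    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.HRegionLocation;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.RegionLocator;

    // Look up which server currently hosts the single testRename region.
    void printRegionLocation(Connection conn) throws Exception {
      TableName table = TableName.valueOf("testRename");
      try (RegionLocator locator = conn.getRegionLocator(table)) {
        // reload=true bypasses the client cache so a fresh assignment is visible.
        HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW, true);
        System.out.println(loc.getRegion().getEncodedName() + " on " + loc.getServerName());
      }
    }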
2023-07-16 18:15:42,680 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=169e52c14d3c900a0913d50e0cfad311, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:42,680 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689531342680"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531342680"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531342680"}]},"ts":"1689531342680"} 2023-07-16 18:15:42,682 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=116, ppid=115, state=RUNNABLE; OpenRegionProcedure 169e52c14d3c900a0913d50e0cfad311, server=jenkins-hbase4.apache.org,44563,1689531327107}] 2023-07-16 18:15:42,779 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-16 18:15:42,838 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 2023-07-16 18:15:42,838 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 169e52c14d3c900a0913d50e0cfad311, NAME => 'testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:42,839 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:42,839 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:42,839 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:42,839 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:42,840 INFO [StoreOpener-169e52c14d3c900a0913d50e0cfad311-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:42,842 DEBUG [StoreOpener-169e52c14d3c900a0913d50e0cfad311-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/testRename/169e52c14d3c900a0913d50e0cfad311/tr 2023-07-16 18:15:42,842 DEBUG [StoreOpener-169e52c14d3c900a0913d50e0cfad311-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/testRename/169e52c14d3c900a0913d50e0cfad311/tr 2023-07-16 18:15:42,842 INFO [StoreOpener-169e52c14d3c900a0913d50e0cfad311-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak 
ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 169e52c14d3c900a0913d50e0cfad311 columnFamilyName tr 2023-07-16 18:15:42,843 INFO [StoreOpener-169e52c14d3c900a0913d50e0cfad311-1] regionserver.HStore(310): Store=169e52c14d3c900a0913d50e0cfad311/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:42,844 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/testRename/169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:42,844 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/testRename/169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:42,847 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:42,849 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/testRename/169e52c14d3c900a0913d50e0cfad311/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:42,850 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 169e52c14d3c900a0913d50e0cfad311; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10368245120, jitterRate=-0.0343819260597229}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:42,850 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 169e52c14d3c900a0913d50e0cfad311: 2023-07-16 18:15:42,851 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311., pid=116, masterSystemTime=1689531342834 2023-07-16 18:15:42,853 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 2023-07-16 18:15:42,853 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 
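The CompactionConfiguration line above is just the store's effective compaction settings printed when the region opens. As a rough key-to-number mapping, the sketch below reads back the usual hbase-site.xml knobs; the key names are the standard ones and are listed here as an assumption rather than taken from this run:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    // Read back the knobs behind the logged numbers (fallbacks match the log).
    void printCompactionKnobs() {
      Configuration conf = HBaseConfiguration.create();
      long minCompactSize = conf.getLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024);
      int minFilesToCompact = conf.getInt("hbase.hstore.compaction.min", 3);
      int maxFilesToCompact = conf.getInt("hbase.hstore.compaction.max", 10);
      double ratio = conf.getDouble("hbase.hstore.compaction.ratio", 1.2);
      double offPeakRatio = conf.getDouble("hbase.hstore.compaction.ratio.offpeak", 5.0);
      long majorPeriodMs = conf.getLong("hbase.hregion.majorcompaction", 604800000L);
      System.out.printf("min=%d files=%d..%d ratio=%.1f/%.1f major=%dms%n",
          minCompactSize, minFilesToCompact, maxFilesToCompact, ratio, offPeakRatio, majorPeriodMs);
    }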
2023-07-16 18:15:42,853 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=115 updating hbase:meta row=169e52c14d3c900a0913d50e0cfad311, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:42,853 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689531342853"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531342853"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531342853"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531342853"}]},"ts":"1689531342853"} 2023-07-16 18:15:42,856 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=116, resume processing ppid=115 2023-07-16 18:15:42,856 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=116, ppid=115, state=SUCCESS; OpenRegionProcedure 169e52c14d3c900a0913d50e0cfad311, server=jenkins-hbase4.apache.org,44563,1689531327107 in 173 msec 2023-07-16 18:15:42,858 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=115, resume processing ppid=114 2023-07-16 18:15:42,858 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=115, ppid=114, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=169e52c14d3c900a0913d50e0cfad311, ASSIGN in 330 msec 2023-07-16 18:15:42,859 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 18:15:42,859 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"testRename","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531342859"}]},"ts":"1689531342859"} 2023-07-16 18:15:42,860 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=testRename, state=ENABLED in hbase:meta 2023-07-16 18:15:42,863 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=114, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=testRename execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 18:15:42,865 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=114, state=SUCCESS; CreateTableProcedure table=testRename in 390 msec 2023-07-16 18:15:43,080 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=114 2023-07-16 18:15:43,080 INFO [Listener at localhost/38073] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:testRename, procId: 114 completed 2023-07-16 18:15:43,081 DEBUG [Listener at localhost/38073] hbase.HBaseTestingUtility(3430): Waiting until all regions of table testRename get assigned. Timeout = 60000ms 2023-07-16 18:15:43,081 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:43,084 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(3484): All regions for table testRename assigned to meta. Checking AM states. 2023-07-16 18:15:43,085 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:43,085 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(3504): All regions for table testRename assigned. 
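The "Waiting until all regions of table testRename get assigned" lines come from the test utility's assignment wait. A minimal sketch, assuming the suite's HBaseTestingUtility instance (named TEST_UTIL here purely for illustration):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;

    // Wait up to 60s for every region of testRename to be assigned and reflected
    // in hbase:meta, matching the 60000ms timeout logged above.
    void waitForTestRename(HBaseTestingUtility TEST_UTIL) throws Exception {
      TEST_UTIL.waitUntilAllRegionsAssigned(TableName.valueOf("testRename"), 60000);
    }

Once this returns, the test moves the table into the oldgroup rsgroup, which is what triggers the MoveTables request and the REOPEN/MOVE region transition that follow.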
2023-07-16 18:15:43,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup oldgroup 2023-07-16 18:15:43,089 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 18:15:43,090 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:43,091 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:43,092 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:43,094 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup oldgroup 2023-07-16 18:15:43,095 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(345): Moving region 169e52c14d3c900a0913d50e0cfad311 to RSGroup oldgroup 2023-07-16 18:15:43,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 18:15:43,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 18:15:43,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 18:15:43,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 18:15:43,095 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 18:15:43,096 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=169e52c14d3c900a0913d50e0cfad311, REOPEN/MOVE 2023-07-16 18:15:43,096 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group oldgroup, current retry=0 2023-07-16 18:15:43,097 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=169e52c14d3c900a0913d50e0cfad311, REOPEN/MOVE 2023-07-16 18:15:43,098 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=169e52c14d3c900a0913d50e0cfad311, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:43,098 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689531343098"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531343098"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531343098"}]},"ts":"1689531343098"} 2023-07-16 18:15:43,100 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=118, 
ppid=117, state=RUNNABLE; CloseRegionProcedure 169e52c14d3c900a0913d50e0cfad311, server=jenkins-hbase4.apache.org,44563,1689531327107}] 2023-07-16 18:15:43,254 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:43,255 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 169e52c14d3c900a0913d50e0cfad311, disabling compactions & flushes 2023-07-16 18:15:43,256 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 2023-07-16 18:15:43,256 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 2023-07-16 18:15:43,256 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. after waiting 0 ms 2023-07-16 18:15:43,256 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 2023-07-16 18:15:43,263 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/testRename/169e52c14d3c900a0913d50e0cfad311/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 18:15:43,263 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 2023-07-16 18:15:43,263 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 169e52c14d3c900a0913d50e0cfad311: 2023-07-16 18:15:43,263 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 169e52c14d3c900a0913d50e0cfad311 move to jenkins-hbase4.apache.org,33809,1689531323219 record at close sequenceid=2 2023-07-16 18:15:43,266 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:43,266 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=169e52c14d3c900a0913d50e0cfad311, regionState=CLOSED 2023-07-16 18:15:43,266 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689531343266"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531343266"}]},"ts":"1689531343266"} 2023-07-16 18:15:43,270 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=118, resume processing ppid=117 2023-07-16 18:15:43,270 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=118, ppid=117, state=SUCCESS; CloseRegionProcedure 169e52c14d3c900a0913d50e0cfad311, server=jenkins-hbase4.apache.org,44563,1689531327107 in 167 msec 2023-07-16 18:15:43,271 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=117, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=169e52c14d3c900a0913d50e0cfad311, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,33809,1689531323219; 
forceNewPlan=false, retain=false 2023-07-16 18:15:43,421 INFO [jenkins-hbase4:45445] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-16 18:15:43,421 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=169e52c14d3c900a0913d50e0cfad311, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:43,422 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689531343421"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531343421"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531343421"}]},"ts":"1689531343421"} 2023-07-16 18:15:43,424 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=119, ppid=117, state=RUNNABLE; OpenRegionProcedure 169e52c14d3c900a0913d50e0cfad311, server=jenkins-hbase4.apache.org,33809,1689531323219}] 2023-07-16 18:15:43,579 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 2023-07-16 18:15:43,579 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 169e52c14d3c900a0913d50e0cfad311, NAME => 'testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:43,579 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:43,579 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:43,579 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:43,579 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:43,581 INFO [StoreOpener-169e52c14d3c900a0913d50e0cfad311-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:43,582 DEBUG [StoreOpener-169e52c14d3c900a0913d50e0cfad311-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/testRename/169e52c14d3c900a0913d50e0cfad311/tr 2023-07-16 18:15:43,582 DEBUG [StoreOpener-169e52c14d3c900a0913d50e0cfad311-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/testRename/169e52c14d3c900a0913d50e0cfad311/tr 2023-07-16 18:15:43,583 INFO [StoreOpener-169e52c14d3c900a0913d50e0cfad311-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 
1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 169e52c14d3c900a0913d50e0cfad311 columnFamilyName tr 2023-07-16 18:15:43,583 INFO [StoreOpener-169e52c14d3c900a0913d50e0cfad311-1] regionserver.HStore(310): Store=169e52c14d3c900a0913d50e0cfad311/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:43,584 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/testRename/169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:43,585 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/testRename/169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:43,588 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:43,589 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 169e52c14d3c900a0913d50e0cfad311; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11995372320, jitterRate=0.11715610325336456}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:43,589 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 169e52c14d3c900a0913d50e0cfad311: 2023-07-16 18:15:43,590 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311., pid=119, masterSystemTime=1689531343576 2023-07-16 18:15:43,591 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 2023-07-16 18:15:43,591 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 
2023-07-16 18:15:43,592 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=117 updating hbase:meta row=169e52c14d3c900a0913d50e0cfad311, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:43,592 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689531343591"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531343591"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531343591"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531343591"}]},"ts":"1689531343591"} 2023-07-16 18:15:43,594 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=119, resume processing ppid=117 2023-07-16 18:15:43,595 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=119, ppid=117, state=SUCCESS; OpenRegionProcedure 169e52c14d3c900a0913d50e0cfad311, server=jenkins-hbase4.apache.org,33809,1689531323219 in 169 msec 2023-07-16 18:15:43,596 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=117, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=169e52c14d3c900a0913d50e0cfad311, REOPEN/MOVE in 500 msec 2023-07-16 18:15:44,097 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure.ProcedureSyncWait(216): waitFor pid=117 2023-07-16 18:15:44,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group oldgroup. 2023-07-16 18:15:44,097 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:44,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:44,100 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:44,103 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:44,103 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-16 18:15:44,104 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 18:15:44,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=oldgroup 2023-07-16 18:15:44,105 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:44,106 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-16 18:15:44,106 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 18:15:44,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:15:44,107 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:44,108 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup normal 2023-07-16 18:15:44,110 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 18:15:44,111 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 18:15:44,112 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:44,113 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:44,113 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 18:15:44,114 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:15:44,117 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:44,117 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:44,119 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43375] to rsgroup normal 2023-07-16 18:15:44,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 18:15:44,121 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 18:15:44,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:44,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:44,122 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] 
rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 18:15:44,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-16 18:15:44,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43375,1689531323422] are moved back to default 2023-07-16 18:15:44,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(438): Move servers done: default => normal 2023-07-16 18:15:44,130 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:44,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:44,132 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:44,134 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-16 18:15:44,135 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:44,136 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 18:15:44,137 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=unmovedTable 2023-07-16 18:15:44,139 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 18:15:44,139 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "unmovedTable" procId is: 120 2023-07-16 18:15:44,140 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-16 18:15:44,141 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 18:15:44,141 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 18:15:44,141 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:44,142 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-16 18:15:44,142 DEBUG [PEWorker-4] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 18:15:44,145 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 18:15:44,146 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/unmovedTable/00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:44,146 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/unmovedTable/00c0b9125cfd04be97eeb4893c8c1908 empty. 2023-07-16 18:15:44,147 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/unmovedTable/00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:44,147 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived unmovedTable regions 2023-07-16 18:15:44,160 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/unmovedTable/.tabledesc/.tableinfo.0000000001 2023-07-16 18:15:44,161 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(7675): creating {ENCODED => 00c0b9125cfd04be97eeb4893c8c1908, NAME => 'unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='unmovedTable', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'ut', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp 2023-07-16 18:15:44,174 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:44,174 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1604): Closing 00c0b9125cfd04be97eeb4893c8c1908, disabling compactions & flushes 2023-07-16 18:15:44,174 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 2023-07-16 18:15:44,174 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 2023-07-16 18:15:44,174 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. after waiting 0 ms 2023-07-16 18:15:44,174 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 2023-07-16 18:15:44,174 INFO [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1838): Closed unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 
2023-07-16 18:15:44,174 DEBUG [RegionOpenAndInit-unmovedTable-pool-0] regionserver.HRegion(1558): Region close journal for 00c0b9125cfd04be97eeb4893c8c1908: 2023-07-16 18:15:44,176 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 18:15:44,177 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689531344177"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531344177"}]},"ts":"1689531344177"} 2023-07-16 18:15:44,178 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 18:15:44,179 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 18:15:44,179 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531344179"}]},"ts":"1689531344179"} 2023-07-16 18:15:44,180 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLING in hbase:meta 2023-07-16 18:15:44,184 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=00c0b9125cfd04be97eeb4893c8c1908, ASSIGN}] 2023-07-16 18:15:44,185 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=unmovedTable, region=00c0b9125cfd04be97eeb4893c8c1908, ASSIGN 2023-07-16 18:15:44,186 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=121, ppid=120, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=00c0b9125cfd04be97eeb4893c8c1908, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44563,1689531327107; forceNewPlan=false, retain=false 2023-07-16 18:15:44,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-16 18:15:44,337 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=00c0b9125cfd04be97eeb4893c8c1908, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:44,338 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689531344337"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531344337"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531344337"}]},"ts":"1689531344337"} 2023-07-16 18:15:44,340 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=122, ppid=121, state=RUNNABLE; OpenRegionProcedure 00c0b9125cfd04be97eeb4893c8c1908, server=jenkins-hbase4.apache.org,44563,1689531327107}] 2023-07-16 18:15:44,442 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): 
Checking to see if procedure is done pid=120 2023-07-16 18:15:44,495 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 2023-07-16 18:15:44,495 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 00c0b9125cfd04be97eeb4893c8c1908, NAME => 'unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:44,496 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:44,496 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:44,496 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:44,496 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:44,497 INFO [StoreOpener-00c0b9125cfd04be97eeb4893c8c1908-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:44,498 DEBUG [StoreOpener-00c0b9125cfd04be97eeb4893c8c1908-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/unmovedTable/00c0b9125cfd04be97eeb4893c8c1908/ut 2023-07-16 18:15:44,498 DEBUG [StoreOpener-00c0b9125cfd04be97eeb4893c8c1908-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/unmovedTable/00c0b9125cfd04be97eeb4893c8c1908/ut 2023-07-16 18:15:44,499 INFO [StoreOpener-00c0b9125cfd04be97eeb4893c8c1908-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 00c0b9125cfd04be97eeb4893c8c1908 columnFamilyName ut 2023-07-16 18:15:44,499 INFO [StoreOpener-00c0b9125cfd04be97eeb4893c8c1908-1] regionserver.HStore(310): Store=00c0b9125cfd04be97eeb4893c8c1908/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:44,500 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/unmovedTable/00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:44,501 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/unmovedTable/00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:44,503 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:44,505 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/unmovedTable/00c0b9125cfd04be97eeb4893c8c1908/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:44,506 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 00c0b9125cfd04be97eeb4893c8c1908; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11524845920, jitterRate=0.07333491742610931}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:44,506 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 00c0b9125cfd04be97eeb4893c8c1908: 2023-07-16 18:15:44,506 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908., pid=122, masterSystemTime=1689531344491 2023-07-16 18:15:44,508 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 2023-07-16 18:15:44,508 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 
2023-07-16 18:15:44,508 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=121 updating hbase:meta row=00c0b9125cfd04be97eeb4893c8c1908, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:44,508 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689531344508"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531344508"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531344508"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531344508"}]},"ts":"1689531344508"} 2023-07-16 18:15:44,511 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=122, resume processing ppid=121 2023-07-16 18:15:44,511 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=122, ppid=121, state=SUCCESS; OpenRegionProcedure 00c0b9125cfd04be97eeb4893c8c1908, server=jenkins-hbase4.apache.org,44563,1689531327107 in 171 msec 2023-07-16 18:15:44,513 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=121, resume processing ppid=120 2023-07-16 18:15:44,513 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=121, ppid=120, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=00c0b9125cfd04be97eeb4893c8c1908, ASSIGN in 327 msec 2023-07-16 18:15:44,513 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 18:15:44,513 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"unmovedTable","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531344513"}]},"ts":"1689531344513"} 2023-07-16 18:15:44,514 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=unmovedTable, state=ENABLED in hbase:meta 2023-07-16 18:15:44,518 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=120, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=unmovedTable execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 18:15:44,519 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=120, state=SUCCESS; CreateTableProcedure table=unmovedTable in 382 msec 2023-07-16 18:15:44,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=120 2023-07-16 18:15:44,743 INFO [Listener at localhost/38073] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:unmovedTable, procId: 120 completed 2023-07-16 18:15:44,744 DEBUG [Listener at localhost/38073] hbase.HBaseTestingUtility(3430): Waiting until all regions of table unmovedTable get assigned. Timeout = 60000ms 2023-07-16 18:15:44,744 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:44,747 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(3484): All regions for table unmovedTable assigned to meta. Checking AM states. 2023-07-16 18:15:44,748 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:44,748 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(3504): All regions for table unmovedTable assigned. 
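The MoveTables, AddRSGroup and MoveServers requests logged above are driven by the rsgroup admin client shipped in the hbase-rsgroup module. A minimal sketch, assuming the RSGroupAdminClient API from this branch; the group names, table name and server address are taken from the log, everything else (class name, connection setup) is illustrative.

```java
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupMoveSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // "move tables [testRename] to rsgroup oldgroup" -- this is what triggers the
      // REOPEN/MOVE TransitRegionStateProcedure seen for region 169e52c14d3c900a0913d50e0cfad311.
      rsGroupAdmin.moveTables(Collections.singleton(TableName.valueOf("testRename")), "oldgroup");

      // "add rsgroup normal" followed by "move servers [...:43375] to rsgroup normal".
      rsGroupAdmin.addRSGroup("normal");
      rsGroupAdmin.moveServers(
          Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 43375)), "normal");
    }
  }
}
```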
2023-07-16 18:15:44,749 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup normal 2023-07-16 18:15:44,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/oldgroup 2023-07-16 18:15:44,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 18:15:44,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:44,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:44,753 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 18:15:44,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup normal 2023-07-16 18:15:44,755 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(345): Moving region 00c0b9125cfd04be97eeb4893c8c1908 to RSGroup normal 2023-07-16 18:15:44,755 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=00c0b9125cfd04be97eeb4893c8c1908, REOPEN/MOVE 2023-07-16 18:15:44,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group normal, current retry=0 2023-07-16 18:15:44,756 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=00c0b9125cfd04be97eeb4893c8c1908, REOPEN/MOVE 2023-07-16 18:15:44,756 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=00c0b9125cfd04be97eeb4893c8c1908, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:44,756 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689531344756"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531344756"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531344756"}]},"ts":"1689531344756"} 2023-07-16 18:15:44,757 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=124, ppid=123, state=RUNNABLE; CloseRegionProcedure 00c0b9125cfd04be97eeb4893c8c1908, server=jenkins-hbase4.apache.org,44563,1689531327107}] 2023-07-16 18:15:44,910 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:44,912 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 00c0b9125cfd04be97eeb4893c8c1908, disabling compactions & flushes 2023-07-16 18:15:44,912 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 
2023-07-16 18:15:44,912 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 2023-07-16 18:15:44,912 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. after waiting 0 ms 2023-07-16 18:15:44,912 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 2023-07-16 18:15:44,916 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/unmovedTable/00c0b9125cfd04be97eeb4893c8c1908/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 18:15:44,917 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 2023-07-16 18:15:44,917 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 00c0b9125cfd04be97eeb4893c8c1908: 2023-07-16 18:15:44,917 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 00c0b9125cfd04be97eeb4893c8c1908 move to jenkins-hbase4.apache.org,43375,1689531323422 record at close sequenceid=2 2023-07-16 18:15:44,919 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:44,919 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=00c0b9125cfd04be97eeb4893c8c1908, regionState=CLOSED 2023-07-16 18:15:44,919 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689531344919"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531344919"}]},"ts":"1689531344919"} 2023-07-16 18:15:44,922 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=124, resume processing ppid=123 2023-07-16 18:15:44,922 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=124, ppid=123, state=SUCCESS; CloseRegionProcedure 00c0b9125cfd04be97eeb4893c8c1908, server=jenkins-hbase4.apache.org,44563,1689531327107 in 164 msec 2023-07-16 18:15:44,923 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=123, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=00c0b9125cfd04be97eeb4893c8c1908, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43375,1689531323422; forceNewPlan=false, retain=false 2023-07-16 18:15:45,073 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=00c0b9125cfd04be97eeb4893c8c1908, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:45,073 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put 
{"totalColumns":3,"row":"unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689531345073"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531345073"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531345073"}]},"ts":"1689531345073"} 2023-07-16 18:15:45,075 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=125, ppid=123, state=RUNNABLE; OpenRegionProcedure 00c0b9125cfd04be97eeb4893c8c1908, server=jenkins-hbase4.apache.org,43375,1689531323422}] 2023-07-16 18:15:45,235 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 2023-07-16 18:15:45,235 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 00c0b9125cfd04be97eeb4893c8c1908, NAME => 'unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:45,235 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:45,235 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:45,235 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:45,235 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:45,236 INFO [StoreOpener-00c0b9125cfd04be97eeb4893c8c1908-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:45,238 DEBUG [StoreOpener-00c0b9125cfd04be97eeb4893c8c1908-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/unmovedTable/00c0b9125cfd04be97eeb4893c8c1908/ut 2023-07-16 18:15:45,238 DEBUG [StoreOpener-00c0b9125cfd04be97eeb4893c8c1908-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/unmovedTable/00c0b9125cfd04be97eeb4893c8c1908/ut 2023-07-16 18:15:45,238 INFO [StoreOpener-00c0b9125cfd04be97eeb4893c8c1908-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
00c0b9125cfd04be97eeb4893c8c1908 columnFamilyName ut 2023-07-16 18:15:45,239 INFO [StoreOpener-00c0b9125cfd04be97eeb4893c8c1908-1] regionserver.HStore(310): Store=00c0b9125cfd04be97eeb4893c8c1908/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:45,240 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/unmovedTable/00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:45,241 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/unmovedTable/00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:45,244 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:45,245 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 00c0b9125cfd04be97eeb4893c8c1908; next sequenceid=5; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10455558720, jitterRate=-0.026250213384628296}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:45,245 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 00c0b9125cfd04be97eeb4893c8c1908: 2023-07-16 18:15:45,246 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908., pid=125, masterSystemTime=1689531345227 2023-07-16 18:15:45,247 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 2023-07-16 18:15:45,247 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 
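Each move in this test is bracketed by read-only lookups (the ListRSGroupInfos, GetRSGroupInfo and GetRSGroupInfoOfTable requests in the log), and the group itself is renamed a little further below (the RenameRSGroup request from oldgroup to newgroup). A hedged sketch of those calls, again assuming the RSGroupAdminClient API from this branch; the printed checks are illustrative, not the test's actual assertions.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupRenameSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Read-only lookup of the kind logged as GetRSGroupInfoOfTable.
      RSGroupInfo before = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
      System.out.println("testRename currently in group: " + before.getName());

      // "rename rsgroup from oldgroup to newgroup": the group keeps its servers and tables,
      // only its name changes, so no region movement is expected from this call.
      rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");

      RSGroupInfo after = rsGroupAdmin.getRSGroupInfoOfTable(TableName.valueOf("testRename"));
      System.out.println("testRename now in group: " + after.getName());
    }
  }
}
```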
2023-07-16 18:15:45,248 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=123 updating hbase:meta row=00c0b9125cfd04be97eeb4893c8c1908, regionState=OPEN, openSeqNum=5, regionLocation=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:45,248 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689531345248"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531345248"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531345248"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531345248"}]},"ts":"1689531345248"} 2023-07-16 18:15:45,251 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=125, resume processing ppid=123 2023-07-16 18:15:45,251 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=125, ppid=123, state=SUCCESS; OpenRegionProcedure 00c0b9125cfd04be97eeb4893c8c1908, server=jenkins-hbase4.apache.org,43375,1689531323422 in 174 msec 2023-07-16 18:15:45,252 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=123, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=00c0b9125cfd04be97eeb4893c8c1908, REOPEN/MOVE in 496 msec 2023-07-16 18:15:45,756 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure.ProcedureSyncWait(216): waitFor pid=123 2023-07-16 18:15:45,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group normal. 2023-07-16 18:15:45,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:45,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:45,759 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:45,762 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [1,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:45,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-16 18:15:45,763 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 18:15:45,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=normal 2023-07-16 18:15:45,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:45,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] 
rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-16 18:15:45,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 18:15:45,766 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(408): Client=jenkins//172.31.14.131 rename rsgroup from oldgroup to newgroup 2023-07-16 18:15:45,769 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 18:15:45,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:45,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:45,770 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 18:15:45,772 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 9 2023-07-16 18:15:45,774 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RenameRSGroup 2023-07-16 18:15:45,777 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:45,778 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:45,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=newgroup 2023-07-16 18:15:45,781 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:45,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=testRename 2023-07-16 18:15:45,782 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 18:15:45,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=unmovedTable 2023-07-16 18:15:45,783 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 18:15:45,787 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:45,787 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:45,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [unmovedTable] to rsgroup default 2023-07-16 18:15:45,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 18:15:45,793 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:45,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:45,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 18:15:45,795 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 18:15:45,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(339): Moving region(s) for table unmovedTable to RSGroup default 2023-07-16 18:15:45,801 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(345): Moving region 00c0b9125cfd04be97eeb4893c8c1908 to RSGroup default 2023-07-16 18:15:45,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=00c0b9125cfd04be97eeb4893c8c1908, REOPEN/MOVE 2023-07-16 18:15:45,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-16 18:15:45,802 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=unmovedTable, region=00c0b9125cfd04be97eeb4893c8c1908, REOPEN/MOVE 2023-07-16 18:15:45,803 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=00c0b9125cfd04be97eeb4893c8c1908, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:45,803 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689531345803"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531345803"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531345803"}]},"ts":"1689531345803"} 2023-07-16 18:15:45,804 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=127, ppid=126, state=RUNNABLE; CloseRegionProcedure 00c0b9125cfd04be97eeb4893c8c1908, server=jenkins-hbase4.apache.org,43375,1689531323422}] 2023-07-16 18:15:45,958 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 
00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:45,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 00c0b9125cfd04be97eeb4893c8c1908, disabling compactions & flushes 2023-07-16 18:15:45,959 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 2023-07-16 18:15:45,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 2023-07-16 18:15:45,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. after waiting 0 ms 2023-07-16 18:15:45,959 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 2023-07-16 18:15:45,967 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/unmovedTable/00c0b9125cfd04be97eeb4893c8c1908/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 18:15:45,968 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 2023-07-16 18:15:45,968 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 00c0b9125cfd04be97eeb4893c8c1908: 2023-07-16 18:15:45,968 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 00c0b9125cfd04be97eeb4893c8c1908 move to jenkins-hbase4.apache.org,44563,1689531327107 record at close sequenceid=5 2023-07-16 18:15:45,976 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:45,976 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=00c0b9125cfd04be97eeb4893c8c1908, regionState=CLOSED 2023-07-16 18:15:45,976 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689531345976"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531345976"}]},"ts":"1689531345976"} 2023-07-16 18:15:45,979 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=127, resume processing ppid=126 2023-07-16 18:15:45,980 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=127, ppid=126, state=SUCCESS; CloseRegionProcedure 00c0b9125cfd04be97eeb4893c8c1908, server=jenkins-hbase4.apache.org,43375,1689531323422 in 174 msec 2023-07-16 18:15:45,980 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=126, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=unmovedTable, region=00c0b9125cfd04be97eeb4893c8c1908, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,44563,1689531327107; forceNewPlan=false, retain=false 2023-07-16 18:15:46,131 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=00c0b9125cfd04be97eeb4893c8c1908, regionState=OPENING, 
regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:46,131 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689531346131"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531346131"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531346131"}]},"ts":"1689531346131"} 2023-07-16 18:15:46,133 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=128, ppid=126, state=RUNNABLE; OpenRegionProcedure 00c0b9125cfd04be97eeb4893c8c1908, server=jenkins-hbase4.apache.org,44563,1689531327107}] 2023-07-16 18:15:46,288 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 2023-07-16 18:15:46,288 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 00c0b9125cfd04be97eeb4893c8c1908, NAME => 'unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:46,289 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table unmovedTable 00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:46,289 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:46,289 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:46,289 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:46,290 INFO [StoreOpener-00c0b9125cfd04be97eeb4893c8c1908-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family ut of region 00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:46,291 DEBUG [StoreOpener-00c0b9125cfd04be97eeb4893c8c1908-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/unmovedTable/00c0b9125cfd04be97eeb4893c8c1908/ut 2023-07-16 18:15:46,291 DEBUG [StoreOpener-00c0b9125cfd04be97eeb4893c8c1908-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/unmovedTable/00c0b9125cfd04be97eeb4893c8c1908/ut 2023-07-16 18:15:46,292 INFO [StoreOpener-00c0b9125cfd04be97eeb4893c8c1908-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single 
output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 00c0b9125cfd04be97eeb4893c8c1908 columnFamilyName ut 2023-07-16 18:15:46,293 INFO [StoreOpener-00c0b9125cfd04be97eeb4893c8c1908-1] regionserver.HStore(310): Store=00c0b9125cfd04be97eeb4893c8c1908/ut, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:46,294 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/unmovedTable/00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:46,295 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/unmovedTable/00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:46,298 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:46,299 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 00c0b9125cfd04be97eeb4893c8c1908; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11069621440, jitterRate=0.030938833951950073}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:46,300 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 00c0b9125cfd04be97eeb4893c8c1908: 2023-07-16 18:15:46,300 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908., pid=128, masterSystemTime=1689531346284 2023-07-16 18:15:46,302 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 2023-07-16 18:15:46,302 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 
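
[Editor's note, not part of the log] The RenameRSGroup and MoveTables records above come from the testRenameRSGroup teardown path. A minimal sketch of the client-side calls that would produce such records is shown below; it assumes a Connection to the mini cluster and the RSGroupAdminClient API from this branch's hbase-rsgroup module (the same class that appears in the stack traces later in this log). Group and table names mirror the log; the exact wiring in TestRSGroupsAdmin1 may differ.

// Sketch only: roughly the rsgroup admin calls behind the
// "rename rsgroup from oldgroup to newgroup" and
// "move tables [unmovedTable] to rsgroup default" records above.
import java.util.Collections;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RenameRSGroupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Issues RSGroupAdminService.RenameRSGroup on the master.
      rsGroupAdmin.renameRSGroup("oldgroup", "newgroup");
      // Issues RSGroupAdminService.MoveTables; on the master this drives the
      // REOPEN/MOVE TransitRegionStateProcedure seen above (close on the old
      // server, hbase:meta update, open on a server of the target group).
      rsGroupAdmin.moveTables(
          Collections.singleton(TableName.valueOf("unmovedTable")), "default");
    }
  }
}
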
2023-07-16 18:15:46,302 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=126 updating hbase:meta row=00c0b9125cfd04be97eeb4893c8c1908, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:46,302 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908.","families":{"info":[{"qualifier":"regioninfo","vlen":46,"tag":[],"timestamp":"1689531346302"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531346302"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531346302"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531346302"}]},"ts":"1689531346302"} 2023-07-16 18:15:46,306 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=128, resume processing ppid=126 2023-07-16 18:15:46,306 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=128, ppid=126, state=SUCCESS; OpenRegionProcedure 00c0b9125cfd04be97eeb4893c8c1908, server=jenkins-hbase4.apache.org,44563,1689531327107 in 171 msec 2023-07-16 18:15:46,308 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=126, state=SUCCESS; TransitRegionStateProcedure table=unmovedTable, region=00c0b9125cfd04be97eeb4893c8c1908, REOPEN/MOVE in 505 msec 2023-07-16 18:15:46,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure.ProcedureSyncWait(216): waitFor pid=126 2023-07-16 18:15:46,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(369): All regions from table(s) [unmovedTable] moved to target group default. 2023-07-16 18:15:46,802 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:46,804 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:43375] to rsgroup default 2023-07-16 18:15:46,806 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/normal 2023-07-16 18:15:46,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:46,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:46,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 18:15:46,808 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 18:15:46,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group normal, current retry=0 2023-07-16 18:15:46,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,43375,1689531323422] are moved back to normal 2023-07-16 18:15:46,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(438): Move servers done: normal => default 2023-07-16 18:15:46,810 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:46,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup normal 2023-07-16 18:15:46,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:46,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:46,815 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 18:15:46,816 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-16 18:15:46,817 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:15:46,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:15:46,818 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-16 18:15:46,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:46,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 18:15:46,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:46,819 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 18:15:46,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:46,823 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 18:15:46,824 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 18:15:46,825 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:15:46,828 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [testRename] to rsgroup default 2023-07-16 18:15:46,834 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:46,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 18:15:46,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:15:46,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(339): Moving region(s) for table testRename to RSGroup default 2023-07-16 18:15:46,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(345): Moving region 169e52c14d3c900a0913d50e0cfad311 to RSGroup default 2023-07-16 18:15:46,839 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=169e52c14d3c900a0913d50e0cfad311, REOPEN/MOVE 2023-07-16 18:15:46,839 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(286): Moving 1 region(s) to group default, current retry=0 2023-07-16 18:15:46,839 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=testRename, region=169e52c14d3c900a0913d50e0cfad311, REOPEN/MOVE 2023-07-16 18:15:46,840 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=169e52c14d3c900a0913d50e0cfad311, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:46,840 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689531346840"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531346840"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531346840"}]},"ts":"1689531346840"} 2023-07-16 18:15:46,842 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=130, ppid=129, state=RUNNABLE; CloseRegionProcedure 169e52c14d3c900a0913d50e0cfad311, server=jenkins-hbase4.apache.org,33809,1689531323219}] 2023-07-16 18:15:46,870 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-hbase.properties,hadoop-metrics2.properties 2023-07-16 18:15:46,995 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:46,996 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 169e52c14d3c900a0913d50e0cfad311, disabling compactions & flushes 2023-07-16 18:15:46,996 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 2023-07-16 18:15:46,996 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 2023-07-16 18:15:46,996 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 
after waiting 0 ms 2023-07-16 18:15:46,996 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 2023-07-16 18:15:47,000 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/testRename/169e52c14d3c900a0913d50e0cfad311/recovered.edits/7.seqid, newMaxSeqId=7, maxSeqId=4 2023-07-16 18:15:47,004 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 2023-07-16 18:15:47,004 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 169e52c14d3c900a0913d50e0cfad311: 2023-07-16 18:15:47,004 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(3513): Adding 169e52c14d3c900a0913d50e0cfad311 move to jenkins-hbase4.apache.org,43375,1689531323422 record at close sequenceid=5 2023-07-16 18:15:47,006 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:47,007 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=169e52c14d3c900a0913d50e0cfad311, regionState=CLOSED 2023-07-16 18:15:47,007 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689531347007"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531347007"}]},"ts":"1689531347007"} 2023-07-16 18:15:47,011 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=130, resume processing ppid=129 2023-07-16 18:15:47,011 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=130, ppid=129, state=SUCCESS; CloseRegionProcedure 169e52c14d3c900a0913d50e0cfad311, server=jenkins-hbase4.apache.org,33809,1689531323219 in 167 msec 2023-07-16 18:15:47,012 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=129, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=testRename, region=169e52c14d3c900a0913d50e0cfad311, REOPEN/MOVE; state=CLOSED, location=jenkins-hbase4.apache.org,43375,1689531323422; forceNewPlan=false, retain=false 2023-07-16 18:15:47,162 INFO [jenkins-hbase4:45445] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
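
[Editor's note, not part of the log] The hbase.Waiter records in this log ("Waiting up to [1,000] milli-secs") show the test polling until the rsgroup metadata settles after a move. A hypothetical helper in that style is sketched below; it is not taken from the test source, and it assumes HBaseTestingUtility.waitFor plus RSGroupAdminClient.getRSGroupInfoOfTable as used elsewhere in this log.

// Sketch only: poll until a table is reported in the expected rsgroup.
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public final class RSGroupWaitSketch {
  private RSGroupWaitSketch() {}

  static void waitForTableInGroup(HBaseTestingUtility util, RSGroupAdminClient admin,
      TableName table, String expectedGroup) throws Exception {
    // Waiter-based polling; the 1,000 ms timeout matches the value printed in the log.
    util.waitFor(1000, () ->
        expectedGroup.equals(admin.getRSGroupInfoOfTable(table).getName()));
  }
}
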
2023-07-16 18:15:47,163 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=169e52c14d3c900a0913d50e0cfad311, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:47,163 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689531347163"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531347163"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531347163"}]},"ts":"1689531347163"} 2023-07-16 18:15:47,165 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=131, ppid=129, state=RUNNABLE; OpenRegionProcedure 169e52c14d3c900a0913d50e0cfad311, server=jenkins-hbase4.apache.org,43375,1689531323422}] 2023-07-16 18:15:47,320 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 2023-07-16 18:15:47,320 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 169e52c14d3c900a0913d50e0cfad311, NAME => 'testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:47,321 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table testRename 169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:47,321 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:47,321 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:47,321 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:47,322 INFO [StoreOpener-169e52c14d3c900a0913d50e0cfad311-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family tr of region 169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:47,323 DEBUG [StoreOpener-169e52c14d3c900a0913d50e0cfad311-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/testRename/169e52c14d3c900a0913d50e0cfad311/tr 2023-07-16 18:15:47,323 DEBUG [StoreOpener-169e52c14d3c900a0913d50e0cfad311-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/testRename/169e52c14d3c900a0913d50e0cfad311/tr 2023-07-16 18:15:47,324 INFO [StoreOpener-169e52c14d3c900a0913d50e0cfad311-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 
9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 169e52c14d3c900a0913d50e0cfad311 columnFamilyName tr 2023-07-16 18:15:47,325 INFO [StoreOpener-169e52c14d3c900a0913d50e0cfad311-1] regionserver.HStore(310): Store=169e52c14d3c900a0913d50e0cfad311/tr, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:47,325 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/testRename/169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:47,327 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/testRename/169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:47,331 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:47,332 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 169e52c14d3c900a0913d50e0cfad311; next sequenceid=8; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10031744320, jitterRate=-0.06572100520133972}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:47,332 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 169e52c14d3c900a0913d50e0cfad311: 2023-07-16 18:15:47,333 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311., pid=131, masterSystemTime=1689531347317 2023-07-16 18:15:47,335 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 2023-07-16 18:15:47,335 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 
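
[Editor's note, not part of the log] The regionLocation= updates to hbase:meta above record the testRename region landing on a server of the target group once the OpenRegionProcedure completes. The assumed helper below illustrates one way a test could verify that placement from the client side; it is an illustration, not the assertion TestRSGroupsAdmin1 actually performs.

// Sketch only: check that every region of a table is hosted by a server
// belonging to the given rsgroup.
import java.io.IOException;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public final class RegionPlacementCheckSketch {
  private RegionPlacementCheckSketch() {}

  static boolean allRegionsInGroup(Connection conn, RSGroupAdminClient admin,
      TableName table, String group) throws IOException {
    RSGroupInfo info = admin.getRSGroupInfo(group);
    try (RegionLocator locator = conn.getRegionLocator(table)) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        // Compare the region's current host:port against the group's server set.
        Address hostPort = Address.fromParts(loc.getHostname(), loc.getPort());
        if (!info.containsServer(hostPort)) {
          return false;
        }
      }
    }
    return true;
  }
}
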
2023-07-16 18:15:47,335 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=129 updating hbase:meta row=169e52c14d3c900a0913d50e0cfad311, regionState=OPEN, openSeqNum=8, regionLocation=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:47,335 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311.","families":{"info":[{"qualifier":"regioninfo","vlen":44,"tag":[],"timestamp":"1689531347335"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531347335"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531347335"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531347335"}]},"ts":"1689531347335"} 2023-07-16 18:15:47,339 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=131, resume processing ppid=129 2023-07-16 18:15:47,339 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=131, ppid=129, state=SUCCESS; OpenRegionProcedure 169e52c14d3c900a0913d50e0cfad311, server=jenkins-hbase4.apache.org,43375,1689531323422 in 172 msec 2023-07-16 18:15:47,340 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=129, state=SUCCESS; TransitRegionStateProcedure table=testRename, region=169e52c14d3c900a0913d50e0cfad311, REOPEN/MOVE in 500 msec 2023-07-16 18:15:47,840 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure.ProcedureSyncWait(216): waitFor pid=129 2023-07-16 18:15:47,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(369): All regions from table(s) [testRename] moved to target group default. 2023-07-16 18:15:47,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:47,841 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927] to rsgroup default 2023-07-16 18:15:47,843 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:47,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/newgroup 2023-07-16 18:15:47,844 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:15:47,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group newgroup, current retry=0 2023-07-16 18:15:47,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33809,1689531323219, jenkins-hbase4.apache.org,41927,1689531323590] are moved back to newgroup 2023-07-16 18:15:47,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(438): Move servers done: newgroup => default 2023-07-16 18:15:47,846 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:47,846 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup newgroup 2023-07-16 18:15:47,850 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:47,851 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 18:15:47,852 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:15:47,855 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 18:15:47,856 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 18:15:47,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:47,858 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:47,860 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:15:47,866 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:15:47,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:47,869 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:47,871 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45445] to rsgroup master 2023-07-16 18:15:47,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:47,871 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.CallRunner(144): callId: 760 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45244 deadline: 1689532547871, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 2023-07-16 18:15:47,871 WARN [Listener at localhost/38073] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 18:15:47,873 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:47,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:47,874 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:47,874 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927, jenkins-hbase4.apache.org:43375, jenkins-hbase4.apache.org:44563], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 18:15:47,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:15:47,875 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:47,892 INFO [Listener at localhost/38073] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRenameRSGroup Thread=508 (was 516), OpenFileDescriptor=779 (was 809), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=409 (was 401) - SystemLoadAverage LEAK? 
-, ProcessCount=173 (was 173), AvailableMemoryMB=2859 (was 2991) 2023-07-16 18:15:47,892 WARN [Listener at localhost/38073] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-16 18:15:47,909 INFO [Listener at localhost/38073] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=508, OpenFileDescriptor=779, MaxFileDescriptor=60000, SystemLoadAverage=409, ProcessCount=173, AvailableMemoryMB=2859 2023-07-16 18:15:47,909 WARN [Listener at localhost/38073] hbase.ResourceChecker(130): Thread=508 is superior to 500 2023-07-16 18:15:47,909 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase(132): testBogusArgs 2023-07-16 18:15:47,913 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:47,913 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:47,913 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:15:47,913 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-16 18:15:47,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:47,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 18:15:47,914 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:47,915 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 18:15:47,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:47,918 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 18:15:47,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:15:47,922 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 18:15:47,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 18:15:47,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:47,924 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-16 18:15:47,926 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:15:47,927 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:15:47,929 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:47,930 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:47,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45445] to rsgroup master 2023-07-16 18:15:47,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:47,932 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.CallRunner(144): callId: 788 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45244 deadline: 1689532547931, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 2023-07-16 18:15:47,932 WARN [Listener at localhost/38073] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 18:15:47,934 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:47,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:47,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:47,934 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927, jenkins-hbase4.apache.org:43375, jenkins-hbase4.apache.org:44563], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 18:15:47,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:15:47,935 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:47,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=nonexistent 2023-07-16 18:15:47,936 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 18:15:47,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(334): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, server=bogus:123 2023-07-16 18:15:47,941 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfServer 2023-07-16 18:15:47,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=bogus 2023-07-16 18:15:47,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:47,942 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup bogus 2023-07-16 18:15:47,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:486) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at 
org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:47,942 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.CallRunner(144): callId: 800 service: MasterService methodName: ExecMasterService size: 87 connection: 172.31.14.131:45244 deadline: 1689532547942, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup bogus does not exist 2023-07-16 18:15:47,944 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [bogus:123] to rsgroup bogus 2023-07-16 18:15:47,944 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.getAndCheckRSGroupInfo(RSGroupAdminServer.java:115) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:398) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:47,945 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.CallRunner(144): callId: 803 service: MasterService methodName: ExecMasterService size: 96 connection: 172.31.14.131:45244 deadline: 1689532547944, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-16 18:15:47,947 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/balancer 2023-07-16 18:15:47,948 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=true 2023-07-16 18:15:47,953 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(292): Client=jenkins//172.31.14.131 balance rsgroup, group=bogus 2023-07-16 18:15:47,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does 
not exist: bogus at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.balanceRSGroup(RSGroupAdminServer.java:523) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.balanceRSGroup(RSGroupAdminEndpoint.java:299) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16213) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:47,954 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.CallRunner(144): callId: 807 service: MasterService methodName: ExecMasterService size: 88 connection: 172.31.14.131:45244 deadline: 1689532547952, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup does not exist: bogus 2023-07-16 18:15:47,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:47,958 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:47,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:15:47,960 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 18:15:47,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:47,960 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 18:15:47,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:47,961 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 18:15:47,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:47,966 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 18:15:47,968 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:15:47,972 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 18:15:47,973 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 18:15:47,975 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:47,976 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:47,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:15:47,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:15:47,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:47,982 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:47,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45445] to rsgroup master 2023-07-16 18:15:47,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:47,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.CallRunner(144): callId: 831 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45244 deadline: 1689532547984, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 2023-07-16 18:15:47,989 WARN [Listener at localhost/38073] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 18:15:47,991 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:47,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:47,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:47,992 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927, jenkins-hbase4.apache.org:43375, jenkins-hbase4.apache.org:44563], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 18:15:47,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:15:47,993 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:48,012 INFO [Listener at localhost/38073] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testBogusArgs Thread=512 (was 508) Potentially hanging thread: hconnection-0x67375dfc-shared-pool-23 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x13eab7c0-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x13eab7c0-shared-pool-26 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x67375dfc-shared-pool-24 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=779 (was 779), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=409 (was 409), ProcessCount=173 (was 173), AvailableMemoryMB=2859 (was 2859) 2023-07-16 18:15:48,012 WARN [Listener at localhost/38073] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-16 18:15:48,030 INFO [Listener at localhost/38073] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=512, OpenFileDescriptor=779, MaxFileDescriptor=60000, SystemLoadAverage=409, ProcessCount=173, AvailableMemoryMB=2858 2023-07-16 18:15:48,030 WARN [Listener at localhost/38073] hbase.ResourceChecker(130): Thread=512 is superior to 500 2023-07-16 18:15:48,031 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase(132): testDisabledTableMove 2023-07-16 18:15:48,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:48,034 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:48,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:15:48,035 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 18:15:48,035 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:48,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 18:15:48,036 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:48,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 18:15:48,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:48,040 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 18:15:48,043 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:15:48,045 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 18:15:48,046 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 18:15:48,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:48,048 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:48,049 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:15:48,051 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:15:48,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:48,053 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:48,055 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45445] to rsgroup master 2023-07-16 18:15:48,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:48,056 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.CallRunner(144): callId: 859 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45244 deadline: 1689532548055, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 2023-07-16 18:15:48,056 WARN [Listener at localhost/38073] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 18:15:48,058 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:48,058 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:48,058 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:48,058 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927, jenkins-hbase4.apache.org:43375, jenkins-hbase4.apache.org:44563], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 18:15:48,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:15:48,059 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:48,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:15:48,060 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:48,061 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_testDisabledTableMove_1913902319 2023-07-16 18:15:48,063 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1913902319 2023-07-16 18:15:48,068 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 
18:15:48,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:48,069 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:48,070 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:15:48,072 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:48,073 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:48,075 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927] to rsgroup Group_testDisabledTableMove_1913902319 2023-07-16 18:15:48,076 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:48,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1913902319 2023-07-16 18:15:48,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:48,077 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:48,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group default, current retry=0 2023-07-16 18:15:48,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33809,1689531323219, jenkins-hbase4.apache.org,41927,1689531323590] are moved back to default 2023-07-16 18:15:48,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(438): Move servers done: default => Group_testDisabledTableMove_1913902319 2023-07-16 18:15:48,079 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:48,081 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:48,081 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:48,083 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=Group_testDisabledTableMove_1913902319 2023-07-16 18:15:48,083 INFO 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:48,084 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 18:15:48,085 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=Group_testDisabledTableMove 2023-07-16 18:15:48,087 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 18:15:48,087 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "Group_testDisabledTableMove" procId is: 132 2023-07-16 18:15:48,087 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-16 18:15:48,088 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:48,089 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1913902319 2023-07-16 18:15:48,089 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:48,089 DEBUG [PEWorker-5] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:48,091 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 18:15:48,094 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/32768011504f3ab8fbbacad128ad4962 2023-07-16 18:15:48,095 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/cf10f6f55144dd8d51d17c763f5b8b1d 2023-07-16 18:15:48,095 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/b578839f3a20dd0d360d0ec8e00c1025 2023-07-16 18:15:48,095 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/504a5498e0d439e6682c838bbf9a1724 2023-07-16 18:15:48,095 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING 
hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/590cf672ade80548ba38a35ea3382a7b 2023-07-16 18:15:48,095 DEBUG [HFileArchiver-2] backup.HFileArchiver(153): Directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/32768011504f3ab8fbbacad128ad4962 empty. 2023-07-16 18:15:48,095 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/b578839f3a20dd0d360d0ec8e00c1025 empty. 2023-07-16 18:15:48,095 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/590cf672ade80548ba38a35ea3382a7b empty. 2023-07-16 18:15:48,095 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/504a5498e0d439e6682c838bbf9a1724 empty. 2023-07-16 18:15:48,095 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/cf10f6f55144dd8d51d17c763f5b8b1d empty. 2023-07-16 18:15:48,096 DEBUG [HFileArchiver-2] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/32768011504f3ab8fbbacad128ad4962 2023-07-16 18:15:48,096 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/b578839f3a20dd0d360d0ec8e00c1025 2023-07-16 18:15:48,096 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/504a5498e0d439e6682c838bbf9a1724 2023-07-16 18:15:48,096 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/590cf672ade80548ba38a35ea3382a7b 2023-07-16 18:15:48,096 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/cf10f6f55144dd8d51d17c763f5b8b1d 2023-07-16 18:15:48,096 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-16 18:15:48,109 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/.tabledesc/.tableinfo.0000000001 2023-07-16 18:15:48,111 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(7675): creating {ENCODED => cf10f6f55144dd8d51d17c763f5b8b1d, NAME => 'Group_testDisabledTableMove,aaaaa,1689531348084.cf10f6f55144dd8d51d17c763f5b8b1d.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', 
BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp 2023-07-16 18:15:48,111 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 590cf672ade80548ba38a35ea3382a7b, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689531348084.590cf672ade80548ba38a35ea3382a7b.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp 2023-07-16 18:15:48,111 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => 32768011504f3ab8fbbacad128ad4962, NAME => 'Group_testDisabledTableMove,,1689531348084.32768011504f3ab8fbbacad128ad4962.', STARTKEY => '', ENDKEY => 'aaaaa'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp 2023-07-16 18:15:48,134 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689531348084.32768011504f3ab8fbbacad128ad4962.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:48,134 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing 32768011504f3ab8fbbacad128ad4962, disabling compactions & flushes 2023-07-16 18:15:48,134 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689531348084.32768011504f3ab8fbbacad128ad4962. 2023-07-16 18:15:48,134 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689531348084.32768011504f3ab8fbbacad128ad4962. 2023-07-16 18:15:48,134 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689531348084.590cf672ade80548ba38a35ea3382a7b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:48,134 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689531348084.32768011504f3ab8fbbacad128ad4962. 
after waiting 0 ms 2023-07-16 18:15:48,134 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689531348084.32768011504f3ab8fbbacad128ad4962. 2023-07-16 18:15:48,134 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 590cf672ade80548ba38a35ea3382a7b, disabling compactions & flushes 2023-07-16 18:15:48,134 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689531348084.32768011504f3ab8fbbacad128ad4962. 2023-07-16 18:15:48,134 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689531348084.590cf672ade80548ba38a35ea3382a7b. 2023-07-16 18:15:48,134 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for 32768011504f3ab8fbbacad128ad4962: 2023-07-16 18:15:48,134 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689531348084.590cf672ade80548ba38a35ea3382a7b. 2023-07-16 18:15:48,135 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689531348084.590cf672ade80548ba38a35ea3382a7b. after waiting 0 ms 2023-07-16 18:15:48,135 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689531348084.590cf672ade80548ba38a35ea3382a7b. 2023-07-16 18:15:48,135 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(7675): creating {ENCODED => b578839f3a20dd0d360d0ec8e00c1025, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689531348084.b578839f3a20dd0d360d0ec8e00c1025.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp 2023-07-16 18:15:48,135 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689531348084.590cf672ade80548ba38a35ea3382a7b. 
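[editor's note] The AddRSGroup and MoveServers requests logged above (18:15:48,061 through 18:15:48,079) correspond to client-side calls roughly like the sketch below. This is a hedged illustration against the RSGroupAdminClient from the hbase-rsgroup module, not the test's actual code; the class name RsGroupSetupSketch is invented, and the group name, hostnames and ports are simply copied from the log output.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RsGroupSetupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      // RSGroupAdminClient talks to the RSGroupAdminEndpoint coprocessor on the master,
      // which is what emits the "master service request for RSGroupAdminService.*" lines.
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Equivalent of the logged AddRSGroup request.
      rsGroupAdmin.addRSGroup("Group_testDisabledTableMove_1913902319");

      // Equivalent of the logged MoveServers request; host/ports taken from the log.
      Set<Address> servers = new HashSet<>(Arrays.asList(
          Address.fromParts("jenkins-hbase4.apache.org", 33809),
          Address.fromParts("jenkins-hbase4.apache.org", 41927)));
      rsGroupAdmin.moveServers(servers, "Group_testDisabledTableMove_1913902319");
    }
  }
}
```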
2023-07-16 18:15:48,135 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 590cf672ade80548ba38a35ea3382a7b: 2023-07-16 18:15:48,136 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(7675): creating {ENCODED => 504a5498e0d439e6682c838bbf9a1724, NAME => 'Group_testDisabledTableMove,zzzzz,1689531348084.504a5498e0d439e6682c838bbf9a1724.', STARTKEY => 'zzzzz', ENDKEY => ''}, tableDescriptor='Group_testDisabledTableMove', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'f', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp 2023-07-16 18:15:48,136 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689531348084.cf10f6f55144dd8d51d17c763f5b8b1d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:48,136 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1604): Closing cf10f6f55144dd8d51d17c763f5b8b1d, disabling compactions & flushes 2023-07-16 18:15:48,136 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689531348084.cf10f6f55144dd8d51d17c763f5b8b1d. 2023-07-16 18:15:48,136 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689531348084.cf10f6f55144dd8d51d17c763f5b8b1d. 2023-07-16 18:15:48,136 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689531348084.cf10f6f55144dd8d51d17c763f5b8b1d. after waiting 0 ms 2023-07-16 18:15:48,137 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689531348084.cf10f6f55144dd8d51d17c763f5b8b1d. 2023-07-16 18:15:48,137 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689531348084.cf10f6f55144dd8d51d17c763f5b8b1d. 2023-07-16 18:15:48,137 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-1] regionserver.HRegion(1558): Region close journal for cf10f6f55144dd8d51d17c763f5b8b1d: 2023-07-16 18:15:48,159 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689531348084.b578839f3a20dd0d360d0ec8e00c1025.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:48,160 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1604): Closing b578839f3a20dd0d360d0ec8e00c1025, disabling compactions & flushes 2023-07-16 18:15:48,160 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689531348084.b578839f3a20dd0d360d0ec8e00c1025. 
2023-07-16 18:15:48,160 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689531348084.b578839f3a20dd0d360d0ec8e00c1025. 2023-07-16 18:15:48,160 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689531348084.b578839f3a20dd0d360d0ec8e00c1025. after waiting 0 ms 2023-07-16 18:15:48,160 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689531348084.b578839f3a20dd0d360d0ec8e00c1025. 2023-07-16 18:15:48,160 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689531348084.b578839f3a20dd0d360d0ec8e00c1025. 2023-07-16 18:15:48,160 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-0] regionserver.HRegion(1558): Region close journal for b578839f3a20dd0d360d0ec8e00c1025: 2023-07-16 18:15:48,188 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-16 18:15:48,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-16 18:15:48,560 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689531348084.504a5498e0d439e6682c838bbf9a1724.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:48,560 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1604): Closing 504a5498e0d439e6682c838bbf9a1724, disabling compactions & flushes 2023-07-16 18:15:48,560 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689531348084.504a5498e0d439e6682c838bbf9a1724. 2023-07-16 18:15:48,560 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689531348084.504a5498e0d439e6682c838bbf9a1724. 2023-07-16 18:15:48,560 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689531348084.504a5498e0d439e6682c838bbf9a1724. after waiting 0 ms 2023-07-16 18:15:48,560 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689531348084.504a5498e0d439e6682c838bbf9a1724. 2023-07-16 18:15:48,560 INFO [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689531348084.504a5498e0d439e6682c838bbf9a1724. 
2023-07-16 18:15:48,560 DEBUG [RegionOpenAndInit-Group_testDisabledTableMove-pool-2] regionserver.HRegion(1558): Region close journal for 504a5498e0d439e6682c838bbf9a1724: 2023-07-16 18:15:48,563 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 18:15:48,564 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689531348084.32768011504f3ab8fbbacad128ad4962.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689531348563"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531348563"}]},"ts":"1689531348563"} 2023-07-16 18:15:48,564 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689531348084.590cf672ade80548ba38a35ea3382a7b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689531348563"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531348563"}]},"ts":"1689531348563"} 2023-07-16 18:15:48,564 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689531348084.cf10f6f55144dd8d51d17c763f5b8b1d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689531348563"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531348563"}]},"ts":"1689531348563"} 2023-07-16 18:15:48,564 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689531348084.b578839f3a20dd0d360d0ec8e00c1025.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689531348563"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531348563"}]},"ts":"1689531348563"} 2023-07-16 18:15:48,564 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689531348084.504a5498e0d439e6682c838bbf9a1724.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689531348563"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531348563"}]},"ts":"1689531348563"} 2023-07-16 18:15:48,566 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 5 regions to meta. 
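[editor's note] The CreateTableProcedure above (pid=132) lays out a five-region table with a single column family 'f' and the region boundaries shown in the meta puts. A minimal client-side sketch of creating such a pre-split table follows; the class name is invented, and the assumption that the split points come from Bytes.split over the 'aaaaa'..'zzzzz' range is an inference from the logged boundaries (the test itself most likely goes through its own test-utility helper).

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePreSplitTableSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      TableDescriptor desc = TableDescriptorBuilder
          .newBuilder(TableName.valueOf("Group_testDisabledTableMove"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
          .build();

      // Four split keys => five regions: 'aaaaa', two interpolated midpoints, 'zzzzz',
      // which lines up with the region boundaries written to hbase:meta in the log.
      byte[][] splitKeys = Bytes.split(Bytes.toBytes("aaaaa"), Bytes.toBytes("zzzzz"), 2);

      // Blocks until the server-side CreateTableProcedure completes.
      admin.createTable(desc, splitKeys);
    }
  }
}
```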
2023-07-16 18:15:48,567 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 18:15:48,567 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531348567"}]},"ts":"1689531348567"} 2023-07-16 18:15:48,568 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLING in hbase:meta 2023-07-16 18:15:48,571 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 18:15:48,571 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 18:15:48,571 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 18:15:48,571 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 18:15:48,572 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=32768011504f3ab8fbbacad128ad4962, ASSIGN}, {pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cf10f6f55144dd8d51d17c763f5b8b1d, ASSIGN}, {pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=590cf672ade80548ba38a35ea3382a7b, ASSIGN}, {pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b578839f3a20dd0d360d0ec8e00c1025, ASSIGN}, {pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=504a5498e0d439e6682c838bbf9a1724, ASSIGN}] 2023-07-16 18:15:48,574 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=504a5498e0d439e6682c838bbf9a1724, ASSIGN 2023-07-16 18:15:48,574 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=590cf672ade80548ba38a35ea3382a7b, ASSIGN 2023-07-16 18:15:48,574 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b578839f3a20dd0d360d0ec8e00c1025, ASSIGN 2023-07-16 18:15:48,574 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cf10f6f55144dd8d51d17c763f5b8b1d, ASSIGN 2023-07-16 18:15:48,574 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; 
TransitRegionStateProcedure table=Group_testDisabledTableMove, region=32768011504f3ab8fbbacad128ad4962, ASSIGN 2023-07-16 18:15:48,574 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=137, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=504a5498e0d439e6682c838bbf9a1724, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43375,1689531323422; forceNewPlan=false, retain=false 2023-07-16 18:15:48,575 INFO [PEWorker-3] assignment.TransitRegionStateProcedure(193): Starting pid=135, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=590cf672ade80548ba38a35ea3382a7b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44563,1689531327107; forceNewPlan=false, retain=false 2023-07-16 18:15:48,575 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=134, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cf10f6f55144dd8d51d17c763f5b8b1d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43375,1689531323422; forceNewPlan=false, retain=false 2023-07-16 18:15:48,575 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=136, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b578839f3a20dd0d360d0ec8e00c1025, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,44563,1689531327107; forceNewPlan=false, retain=false 2023-07-16 18:15:48,575 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=133, ppid=132, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=32768011504f3ab8fbbacad128ad4962, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43375,1689531323422; forceNewPlan=false, retain=false 2023-07-16 18:15:48,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-16 18:15:48,725 INFO [jenkins-hbase4:45445] balancer.BaseLoadBalancer(1545): Reassigned 5 regions. 5 retained the pre-restart assignment. 
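[editor's note] While the TransitRegionStateProcedures above move the new regions into OPENING state, a client can observe the same transitions through the regions-in-transition section of the cluster metrics. A hedged sketch, assuming the public ClusterMetrics API; the class name is invented.

```java
import java.util.EnumSet;
import java.util.List;

import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.master.RegionState;

public class RegionsInTransitionSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      // Ask the master only for the regions-in-transition part of the cluster status.
      ClusterMetrics metrics =
          admin.getClusterMetrics(EnumSet.of(ClusterMetrics.Option.REGIONS_IN_TRANSITION));

      // During the assignment above, the new regions appear here in OPENING state
      // together with the server the balancer picked for them.
      List<RegionState> rits = metrics.getRegionStatesInTransition();
      for (RegionState state : rits) {
        System.out.println(state.getRegion().getRegionNameAsString()
            + " " + state.getState() + " on " + state.getServerName());
      }
    }
  }
}
```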
2023-07-16 18:15:48,728 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=cf10f6f55144dd8d51d17c763f5b8b1d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:48,728 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=590cf672ade80548ba38a35ea3382a7b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:48,728 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=504a5498e0d439e6682c838bbf9a1724, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:48,728 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=b578839f3a20dd0d360d0ec8e00c1025, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:48,728 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=32768011504f3ab8fbbacad128ad4962, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:48,729 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689531348084.b578839f3a20dd0d360d0ec8e00c1025.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689531348728"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531348728"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531348728"}]},"ts":"1689531348728"} 2023-07-16 18:15:48,729 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689531348084.504a5498e0d439e6682c838bbf9a1724.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689531348728"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531348728"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531348728"}]},"ts":"1689531348728"} 2023-07-16 18:15:48,729 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689531348084.590cf672ade80548ba38a35ea3382a7b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689531348728"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531348728"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531348728"}]},"ts":"1689531348728"} 2023-07-16 18:15:48,729 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689531348084.cf10f6f55144dd8d51d17c763f5b8b1d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689531348728"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531348728"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531348728"}]},"ts":"1689531348728"} 2023-07-16 18:15:48,729 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689531348084.32768011504f3ab8fbbacad128ad4962.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689531348728"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531348728"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531348728"}]},"ts":"1689531348728"} 2023-07-16 18:15:48,730 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=138, ppid=136, state=RUNNABLE; OpenRegionProcedure b578839f3a20dd0d360d0ec8e00c1025, 
server=jenkins-hbase4.apache.org,44563,1689531327107}] 2023-07-16 18:15:48,731 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=139, ppid=137, state=RUNNABLE; OpenRegionProcedure 504a5498e0d439e6682c838bbf9a1724, server=jenkins-hbase4.apache.org,43375,1689531323422}] 2023-07-16 18:15:48,731 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=140, ppid=135, state=RUNNABLE; OpenRegionProcedure 590cf672ade80548ba38a35ea3382a7b, server=jenkins-hbase4.apache.org,44563,1689531327107}] 2023-07-16 18:15:48,733 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=141, ppid=134, state=RUNNABLE; OpenRegionProcedure cf10f6f55144dd8d51d17c763f5b8b1d, server=jenkins-hbase4.apache.org,43375,1689531323422}] 2023-07-16 18:15:48,734 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=142, ppid=133, state=RUNNABLE; OpenRegionProcedure 32768011504f3ab8fbbacad128ad4962, server=jenkins-hbase4.apache.org,43375,1689531323422}] 2023-07-16 18:15:48,887 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,i\xBF\x14i\xBE,1689531348084.590cf672ade80548ba38a35ea3382a7b. 2023-07-16 18:15:48,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 590cf672ade80548ba38a35ea3382a7b, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689531348084.590cf672ade80548ba38a35ea3382a7b.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'} 2023-07-16 18:15:48,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 590cf672ade80548ba38a35ea3382a7b 2023-07-16 18:15:48,887 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,i\xBF\x14i\xBE,1689531348084.590cf672ade80548ba38a35ea3382a7b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:48,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 590cf672ade80548ba38a35ea3382a7b 2023-07-16 18:15:48,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 590cf672ade80548ba38a35ea3382a7b 2023-07-16 18:15:48,888 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,,1689531348084.32768011504f3ab8fbbacad128ad4962. 
2023-07-16 18:15:48,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 32768011504f3ab8fbbacad128ad4962, NAME => 'Group_testDisabledTableMove,,1689531348084.32768011504f3ab8fbbacad128ad4962.', STARTKEY => '', ENDKEY => 'aaaaa'} 2023-07-16 18:15:48,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 32768011504f3ab8fbbacad128ad4962 2023-07-16 18:15:48,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,,1689531348084.32768011504f3ab8fbbacad128ad4962.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:48,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 32768011504f3ab8fbbacad128ad4962 2023-07-16 18:15:48,888 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 32768011504f3ab8fbbacad128ad4962 2023-07-16 18:15:48,890 INFO [StoreOpener-32768011504f3ab8fbbacad128ad4962-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 32768011504f3ab8fbbacad128ad4962 2023-07-16 18:15:48,892 DEBUG [StoreOpener-32768011504f3ab8fbbacad128ad4962-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/32768011504f3ab8fbbacad128ad4962/f 2023-07-16 18:15:48,892 DEBUG [StoreOpener-32768011504f3ab8fbbacad128ad4962-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/32768011504f3ab8fbbacad128ad4962/f 2023-07-16 18:15:48,893 INFO [StoreOpener-32768011504f3ab8fbbacad128ad4962-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 32768011504f3ab8fbbacad128ad4962 columnFamilyName f 2023-07-16 18:15:48,893 INFO [StoreOpener-590cf672ade80548ba38a35ea3382a7b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 590cf672ade80548ba38a35ea3382a7b 2023-07-16 18:15:48,894 INFO [StoreOpener-32768011504f3ab8fbbacad128ad4962-1] regionserver.HStore(310): Store=32768011504f3ab8fbbacad128ad4962/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, 
compression=NONE 2023-07-16 18:15:48,895 DEBUG [StoreOpener-590cf672ade80548ba38a35ea3382a7b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/590cf672ade80548ba38a35ea3382a7b/f 2023-07-16 18:15:48,895 DEBUG [StoreOpener-590cf672ade80548ba38a35ea3382a7b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/590cf672ade80548ba38a35ea3382a7b/f 2023-07-16 18:15:48,895 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/32768011504f3ab8fbbacad128ad4962 2023-07-16 18:15:48,895 INFO [StoreOpener-590cf672ade80548ba38a35ea3382a7b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 590cf672ade80548ba38a35ea3382a7b columnFamilyName f 2023-07-16 18:15:48,895 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/32768011504f3ab8fbbacad128ad4962 2023-07-16 18:15:48,896 INFO [StoreOpener-590cf672ade80548ba38a35ea3382a7b-1] regionserver.HStore(310): Store=590cf672ade80548ba38a35ea3382a7b/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:48,897 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/590cf672ade80548ba38a35ea3382a7b 2023-07-16 18:15:48,897 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/590cf672ade80548ba38a35ea3382a7b 2023-07-16 18:15:48,898 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 32768011504f3ab8fbbacad128ad4962 2023-07-16 18:15:48,900 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 590cf672ade80548ba38a35ea3382a7b 2023-07-16 18:15:48,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/32768011504f3ab8fbbacad128ad4962/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:48,901 
INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 32768011504f3ab8fbbacad128ad4962; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9630146560, jitterRate=-0.10312271118164062}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:48,901 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 32768011504f3ab8fbbacad128ad4962: 2023-07-16 18:15:48,902 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/590cf672ade80548ba38a35ea3382a7b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:48,902 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,,1689531348084.32768011504f3ab8fbbacad128ad4962., pid=142, masterSystemTime=1689531348883 2023-07-16 18:15:48,903 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 590cf672ade80548ba38a35ea3382a7b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=9842693920, jitterRate=-0.08332769572734833}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:48,903 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 590cf672ade80548ba38a35ea3382a7b: 2023-07-16 18:15:48,903 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689531348084.590cf672ade80548ba38a35ea3382a7b., pid=140, masterSystemTime=1689531348882 2023-07-16 18:15:48,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,,1689531348084.32768011504f3ab8fbbacad128ad4962. 2023-07-16 18:15:48,904 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,,1689531348084.32768011504f3ab8fbbacad128ad4962. 2023-07-16 18:15:48,904 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,zzzzz,1689531348084.504a5498e0d439e6682c838bbf9a1724. 
2023-07-16 18:15:48,904 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 504a5498e0d439e6682c838bbf9a1724, NAME => 'Group_testDisabledTableMove,zzzzz,1689531348084.504a5498e0d439e6682c838bbf9a1724.', STARTKEY => 'zzzzz', ENDKEY => ''} 2023-07-16 18:15:48,905 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove 504a5498e0d439e6682c838bbf9a1724 2023-07-16 18:15:48,905 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=133 updating hbase:meta row=32768011504f3ab8fbbacad128ad4962, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:48,905 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,i\xBF\x14i\xBE,1689531348084.590cf672ade80548ba38a35ea3382a7b. 2023-07-16 18:15:48,905 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,i\xBF\x14i\xBE,1689531348084.590cf672ade80548ba38a35ea3382a7b. 2023-07-16 18:15:48,905 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,,1689531348084.32768011504f3ab8fbbacad128ad4962.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689531348905"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531348905"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531348905"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531348905"}]},"ts":"1689531348905"} 2023-07-16 18:15:48,905 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689531348084.b578839f3a20dd0d360d0ec8e00c1025. 
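[editor's note] The GetRSGroupInfo requests logged near 18:15:48,083 are the test reading back the group it just populated. A hedged sketch of that read-back with the same hbase-rsgroup client as in the earlier sketch; the class name is invented and the group/server identifiers are copied from the log.

```java
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RsGroupLookupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection()) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // Equivalent of the logged GetRSGroupInfo request for the new group.
      RSGroupInfo group = rsGroupAdmin.getRSGroupInfo("Group_testDisabledTableMove_1913902319");
      System.out.println("servers in group: " + group.getServers());

      // Which group a particular region server currently belongs to.
      RSGroupInfo ofServer =
          rsGroupAdmin.getRSGroupOfServer(Address.fromParts("jenkins-hbase4.apache.org", 33809));
      System.out.println("group of server: " + ofServer.getName());
    }
  }
}
```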
2023-07-16 18:15:48,905 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => b578839f3a20dd0d360d0ec8e00c1025, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689531348084.b578839f3a20dd0d360d0ec8e00c1025.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'} 2023-07-16 18:15:48,905 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove b578839f3a20dd0d360d0ec8e00c1025 2023-07-16 18:15:48,905 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=135 updating hbase:meta row=590cf672ade80548ba38a35ea3382a7b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:48,905 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,zzzzz,1689531348084.504a5498e0d439e6682c838bbf9a1724.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:48,905 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689531348084.b578839f3a20dd0d360d0ec8e00c1025.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:48,906 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689531348084.590cf672ade80548ba38a35ea3382a7b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689531348905"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531348905"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531348905"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531348905"}]},"ts":"1689531348905"} 2023-07-16 18:15:48,906 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for b578839f3a20dd0d360d0ec8e00c1025 2023-07-16 18:15:48,906 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for b578839f3a20dd0d360d0ec8e00c1025 2023-07-16 18:15:48,906 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 504a5498e0d439e6682c838bbf9a1724 2023-07-16 18:15:48,906 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 504a5498e0d439e6682c838bbf9a1724 2023-07-16 18:15:48,908 INFO [StoreOpener-b578839f3a20dd0d360d0ec8e00c1025-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region b578839f3a20dd0d360d0ec8e00c1025 2023-07-16 18:15:48,909 INFO [StoreOpener-504a5498e0d439e6682c838bbf9a1724-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region 504a5498e0d439e6682c838bbf9a1724 2023-07-16 18:15:48,909 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=142, resume processing ppid=133 2023-07-16 18:15:48,909 INFO [PEWorker-2] 
procedure2.ProcedureExecutor(1410): Finished pid=142, ppid=133, state=SUCCESS; OpenRegionProcedure 32768011504f3ab8fbbacad128ad4962, server=jenkins-hbase4.apache.org,43375,1689531323422 in 173 msec 2023-07-16 18:15:48,910 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=140, resume processing ppid=135 2023-07-16 18:15:48,910 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=140, ppid=135, state=SUCCESS; OpenRegionProcedure 590cf672ade80548ba38a35ea3382a7b, server=jenkins-hbase4.apache.org,44563,1689531327107 in 177 msec 2023-07-16 18:15:48,910 DEBUG [StoreOpener-504a5498e0d439e6682c838bbf9a1724-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/504a5498e0d439e6682c838bbf9a1724/f 2023-07-16 18:15:48,911 DEBUG [StoreOpener-504a5498e0d439e6682c838bbf9a1724-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/504a5498e0d439e6682c838bbf9a1724/f 2023-07-16 18:15:48,911 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=133, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=32768011504f3ab8fbbacad128ad4962, ASSIGN in 337 msec 2023-07-16 18:15:48,911 INFO [StoreOpener-504a5498e0d439e6682c838bbf9a1724-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 504a5498e0d439e6682c838bbf9a1724 columnFamilyName f 2023-07-16 18:15:48,911 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=135, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=590cf672ade80548ba38a35ea3382a7b, ASSIGN in 338 msec 2023-07-16 18:15:48,911 DEBUG [StoreOpener-b578839f3a20dd0d360d0ec8e00c1025-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/b578839f3a20dd0d360d0ec8e00c1025/f 2023-07-16 18:15:48,911 DEBUG [StoreOpener-b578839f3a20dd0d360d0ec8e00c1025-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/b578839f3a20dd0d360d0ec8e00c1025/f 2023-07-16 18:15:48,912 INFO [StoreOpener-b578839f3a20dd0d360d0ec8e00c1025-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window 
org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region b578839f3a20dd0d360d0ec8e00c1025 columnFamilyName f 2023-07-16 18:15:48,912 INFO [StoreOpener-504a5498e0d439e6682c838bbf9a1724-1] regionserver.HStore(310): Store=504a5498e0d439e6682c838bbf9a1724/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:48,913 INFO [StoreOpener-b578839f3a20dd0d360d0ec8e00c1025-1] regionserver.HStore(310): Store=b578839f3a20dd0d360d0ec8e00c1025/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:48,913 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/504a5498e0d439e6682c838bbf9a1724 2023-07-16 18:15:48,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/b578839f3a20dd0d360d0ec8e00c1025 2023-07-16 18:15:48,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/504a5498e0d439e6682c838bbf9a1724 2023-07-16 18:15:48,914 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/b578839f3a20dd0d360d0ec8e00c1025 2023-07-16 18:15:48,917 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 504a5498e0d439e6682c838bbf9a1724 2023-07-16 18:15:48,927 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for b578839f3a20dd0d360d0ec8e00c1025 2023-07-16 18:15:48,927 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/504a5498e0d439e6682c838bbf9a1724/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:48,928 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 504a5498e0d439e6682c838bbf9a1724; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11782107360, jitterRate=0.09729425609111786}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:48,928 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 504a5498e0d439e6682c838bbf9a1724: 2023-07-16 18:15:48,929 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,zzzzz,1689531348084.504a5498e0d439e6682c838bbf9a1724., pid=139, masterSystemTime=1689531348883 2023-07-16 18:15:48,930 DEBUG 
[RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/b578839f3a20dd0d360d0ec8e00c1025/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:48,930 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened b578839f3a20dd0d360d0ec8e00c1025; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10279680800, jitterRate=-0.04263012111186981}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:48,930 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for b578839f3a20dd0d360d0ec8e00c1025: 2023-07-16 18:15:48,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,zzzzz,1689531348084.504a5498e0d439e6682c838bbf9a1724. 2023-07-16 18:15:48,931 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,zzzzz,1689531348084.504a5498e0d439e6682c838bbf9a1724. 2023-07-16 18:15:48,931 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open Group_testDisabledTableMove,aaaaa,1689531348084.cf10f6f55144dd8d51d17c763f5b8b1d. 2023-07-16 18:15:48,931 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=137 updating hbase:meta row=504a5498e0d439e6682c838bbf9a1724, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:48,931 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689531348084.b578839f3a20dd0d360d0ec8e00c1025., pid=138, masterSystemTime=1689531348882 2023-07-16 18:15:48,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => cf10f6f55144dd8d51d17c763f5b8b1d, NAME => 'Group_testDisabledTableMove,aaaaa,1689531348084.cf10f6f55144dd8d51d17c763f5b8b1d.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'} 2023-07-16 18:15:48,931 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,zzzzz,1689531348084.504a5498e0d439e6682c838bbf9a1724.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689531348931"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531348931"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531348931"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531348931"}]},"ts":"1689531348931"} 2023-07-16 18:15:48,931 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table Group_testDisabledTableMove cf10f6f55144dd8d51d17c763f5b8b1d 2023-07-16 18:15:48,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated Group_testDisabledTableMove,aaaaa,1689531348084.cf10f6f55144dd8d51d17c763f5b8b1d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:48,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 
cf10f6f55144dd8d51d17c763f5b8b1d 2023-07-16 18:15:48,932 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for cf10f6f55144dd8d51d17c763f5b8b1d 2023-07-16 18:15:48,933 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689531348084.b578839f3a20dd0d360d0ec8e00c1025. 2023-07-16 18:15:48,933 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689531348084.b578839f3a20dd0d360d0ec8e00c1025. 2023-07-16 18:15:48,934 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=136 updating hbase:meta row=b578839f3a20dd0d360d0ec8e00c1025, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:48,934 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689531348084.b578839f3a20dd0d360d0ec8e00c1025.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689531348934"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531348934"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531348934"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531348934"}]},"ts":"1689531348934"} 2023-07-16 18:15:48,935 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=139, resume processing ppid=137 2023-07-16 18:15:48,935 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=139, ppid=137, state=SUCCESS; OpenRegionProcedure 504a5498e0d439e6682c838bbf9a1724, server=jenkins-hbase4.apache.org,43375,1689531323422 in 202 msec 2023-07-16 18:15:48,936 INFO [StoreOpener-cf10f6f55144dd8d51d17c763f5b8b1d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family f of region cf10f6f55144dd8d51d17c763f5b8b1d 2023-07-16 18:15:48,937 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=137, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=504a5498e0d439e6682c838bbf9a1724, ASSIGN in 363 msec 2023-07-16 18:15:48,937 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=138, resume processing ppid=136 2023-07-16 18:15:48,937 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=138, ppid=136, state=SUCCESS; OpenRegionProcedure b578839f3a20dd0d360d0ec8e00c1025, server=jenkins-hbase4.apache.org,44563,1689531327107 in 205 msec 2023-07-16 18:15:48,938 DEBUG [StoreOpener-cf10f6f55144dd8d51d17c763f5b8b1d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/cf10f6f55144dd8d51d17c763f5b8b1d/f 2023-07-16 18:15:48,938 DEBUG [StoreOpener-cf10f6f55144dd8d51d17c763f5b8b1d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/cf10f6f55144dd8d51d17c763f5b8b1d/f 2023-07-16 18:15:48,938 INFO [StoreOpener-cf10f6f55144dd8d51d17c763f5b8b1d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, 
offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region cf10f6f55144dd8d51d17c763f5b8b1d columnFamilyName f 2023-07-16 18:15:48,939 INFO [StoreOpener-cf10f6f55144dd8d51d17c763f5b8b1d-1] regionserver.HStore(310): Store=cf10f6f55144dd8d51d17c763f5b8b1d/f, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:48,939 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=136, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b578839f3a20dd0d360d0ec8e00c1025, ASSIGN in 365 msec 2023-07-16 18:15:48,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/cf10f6f55144dd8d51d17c763f5b8b1d 2023-07-16 18:15:48,940 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/cf10f6f55144dd8d51d17c763f5b8b1d 2023-07-16 18:15:48,943 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for cf10f6f55144dd8d51d17c763f5b8b1d 2023-07-16 18:15:48,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/cf10f6f55144dd8d51d17c763f5b8b1d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:48,946 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened cf10f6f55144dd8d51d17c763f5b8b1d; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11497125600, jitterRate=0.07075326144695282}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:48,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for cf10f6f55144dd8d51d17c763f5b8b1d: 2023-07-16 18:15:48,947 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for Group_testDisabledTableMove,aaaaa,1689531348084.cf10f6f55144dd8d51d17c763f5b8b1d., pid=141, masterSystemTime=1689531348883 2023-07-16 18:15:48,948 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for Group_testDisabledTableMove,aaaaa,1689531348084.cf10f6f55144dd8d51d17c763f5b8b1d. 2023-07-16 18:15:48,948 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened Group_testDisabledTableMove,aaaaa,1689531348084.cf10f6f55144dd8d51d17c763f5b8b1d. 
2023-07-16 18:15:48,949 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=134 updating hbase:meta row=cf10f6f55144dd8d51d17c763f5b8b1d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:48,949 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"Group_testDisabledTableMove,aaaaa,1689531348084.cf10f6f55144dd8d51d17c763f5b8b1d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689531348949"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531348949"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531348949"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531348949"}]},"ts":"1689531348949"} 2023-07-16 18:15:48,951 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=141, resume processing ppid=134 2023-07-16 18:15:48,952 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=141, ppid=134, state=SUCCESS; OpenRegionProcedure cf10f6f55144dd8d51d17c763f5b8b1d, server=jenkins-hbase4.apache.org,43375,1689531323422 in 217 msec 2023-07-16 18:15:48,953 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=134, resume processing ppid=132 2023-07-16 18:15:48,953 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=134, ppid=132, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cf10f6f55144dd8d51d17c763f5b8b1d, ASSIGN in 380 msec 2023-07-16 18:15:48,954 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 18:15:48,954 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531348954"}]},"ts":"1689531348954"} 2023-07-16 18:15:48,955 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=ENABLED in hbase:meta 2023-07-16 18:15:48,957 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=132, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=Group_testDisabledTableMove execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 18:15:48,958 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=132, state=SUCCESS; CreateTableProcedure table=Group_testDisabledTableMove in 873 msec 2023-07-16 18:15:49,192 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=132 2023-07-16 18:15:49,192 INFO [Listener at localhost/38073] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:Group_testDisabledTableMove, procId: 132 completed 2023-07-16 18:15:49,192 DEBUG [Listener at localhost/38073] hbase.HBaseTestingUtility(3430): Waiting until all regions of table Group_testDisabledTableMove get assigned. Timeout = 60000ms 2023-07-16 18:15:49,193 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:49,197 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(3484): All regions for table Group_testDisabledTableMove assigned to meta. Checking AM states. 
2023-07-16 18:15:49,197 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:49,197 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(3504): All regions for table Group_testDisabledTableMove assigned. 2023-07-16 18:15:49,198 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:49,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-16 18:15:49,205 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 18:15:49,206 INFO [Listener at localhost/38073] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-16 18:15:49,206 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-16 18:15:49,207 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=143, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=Group_testDisabledTableMove 2023-07-16 18:15:49,216 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-16 18:15:49,216 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531349216"}]},"ts":"1689531349216"} 2023-07-16 18:15:49,217 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLING in hbase:meta 2023-07-16 18:15:49,219 INFO [PEWorker-3] procedure.DisableTableProcedure(293): Set Group_testDisabledTableMove to state=DISABLING 2023-07-16 18:15:49,220 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=32768011504f3ab8fbbacad128ad4962, UNASSIGN}, {pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cf10f6f55144dd8d51d17c763f5b8b1d, UNASSIGN}, {pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=590cf672ade80548ba38a35ea3382a7b, UNASSIGN}, {pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b578839f3a20dd0d360d0ec8e00c1025, UNASSIGN}, {pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=504a5498e0d439e6682c838bbf9a1724, UNASSIGN}] 2023-07-16 18:15:49,223 INFO [PEWorker-3] procedure.MasterProcedureScheduler(727): Took xlock for pid=148, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=504a5498e0d439e6682c838bbf9a1724, UNASSIGN 2023-07-16 18:15:49,224 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=148 updating hbase:meta 
row=504a5498e0d439e6682c838bbf9a1724, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:49,224 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,zzzzz,1689531348084.504a5498e0d439e6682c838bbf9a1724.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689531349224"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531349224"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531349224"}]},"ts":"1689531349224"} 2023-07-16 18:15:49,225 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=149, ppid=148, state=RUNNABLE; CloseRegionProcedure 504a5498e0d439e6682c838bbf9a1724, server=jenkins-hbase4.apache.org,43375,1689531323422}] 2023-07-16 18:15:49,229 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=145, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cf10f6f55144dd8d51d17c763f5b8b1d, UNASSIGN 2023-07-16 18:15:49,230 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=146, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=590cf672ade80548ba38a35ea3382a7b, UNASSIGN 2023-07-16 18:15:49,230 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=144, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=32768011504f3ab8fbbacad128ad4962, UNASSIGN 2023-07-16 18:15:49,230 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=147, ppid=143, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b578839f3a20dd0d360d0ec8e00c1025, UNASSIGN 2023-07-16 18:15:49,232 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=cf10f6f55144dd8d51d17c763f5b8b1d, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:49,232 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=32768011504f3ab8fbbacad128ad4962, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:49,232 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=590cf672ade80548ba38a35ea3382a7b, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:49,233 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,,1689531348084.32768011504f3ab8fbbacad128ad4962.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689531349232"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531349232"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531349232"}]},"ts":"1689531349232"} 2023-07-16 18:15:49,233 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689531348084.590cf672ade80548ba38a35ea3382a7b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689531349232"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531349232"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531349232"}]},"ts":"1689531349232"} 2023-07-16 18:15:49,232 INFO [PEWorker-5] assignment.RegionStateStore(219): 
pid=147 updating hbase:meta row=b578839f3a20dd0d360d0ec8e00c1025, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:49,233 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689531348084.b578839f3a20dd0d360d0ec8e00c1025.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689531349232"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531349232"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531349232"}]},"ts":"1689531349232"} 2023-07-16 18:15:49,233 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"Group_testDisabledTableMove,aaaaa,1689531348084.cf10f6f55144dd8d51d17c763f5b8b1d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689531349232"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531349232"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531349232"}]},"ts":"1689531349232"} 2023-07-16 18:15:49,234 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=150, ppid=144, state=RUNNABLE; CloseRegionProcedure 32768011504f3ab8fbbacad128ad4962, server=jenkins-hbase4.apache.org,43375,1689531323422}] 2023-07-16 18:15:49,236 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=151, ppid=147, state=RUNNABLE; CloseRegionProcedure b578839f3a20dd0d360d0ec8e00c1025, server=jenkins-hbase4.apache.org,44563,1689531327107}] 2023-07-16 18:15:49,236 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=152, ppid=146, state=RUNNABLE; CloseRegionProcedure 590cf672ade80548ba38a35ea3382a7b, server=jenkins-hbase4.apache.org,44563,1689531327107}] 2023-07-16 18:15:49,237 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=153, ppid=145, state=RUNNABLE; CloseRegionProcedure cf10f6f55144dd8d51d17c763f5b8b1d, server=jenkins-hbase4.apache.org,43375,1689531323422}] 2023-07-16 18:15:49,317 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-16 18:15:49,381 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 504a5498e0d439e6682c838bbf9a1724 2023-07-16 18:15:49,382 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 504a5498e0d439e6682c838bbf9a1724, disabling compactions & flushes 2023-07-16 18:15:49,382 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,zzzzz,1689531348084.504a5498e0d439e6682c838bbf9a1724. 2023-07-16 18:15:49,382 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,zzzzz,1689531348084.504a5498e0d439e6682c838bbf9a1724. 2023-07-16 18:15:49,382 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,zzzzz,1689531348084.504a5498e0d439e6682c838bbf9a1724. after waiting 0 ms 2023-07-16 18:15:49,382 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,zzzzz,1689531348084.504a5498e0d439e6682c838bbf9a1724. 
2023-07-16 18:15:49,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/504a5498e0d439e6682c838bbf9a1724/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 18:15:49,386 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,zzzzz,1689531348084.504a5498e0d439e6682c838bbf9a1724. 2023-07-16 18:15:49,386 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 504a5498e0d439e6682c838bbf9a1724: 2023-07-16 18:15:49,388 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close b578839f3a20dd0d360d0ec8e00c1025 2023-07-16 18:15:49,388 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 504a5498e0d439e6682c838bbf9a1724 2023-07-16 18:15:49,389 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing b578839f3a20dd0d360d0ec8e00c1025, disabling compactions & flushes 2023-07-16 18:15:49,389 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689531348084.b578839f3a20dd0d360d0ec8e00c1025. 2023-07-16 18:15:49,389 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close cf10f6f55144dd8d51d17c763f5b8b1d 2023-07-16 18:15:49,389 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689531348084.b578839f3a20dd0d360d0ec8e00c1025. 2023-07-16 18:15:49,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing cf10f6f55144dd8d51d17c763f5b8b1d, disabling compactions & flushes 2023-07-16 18:15:49,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689531348084.b578839f3a20dd0d360d0ec8e00c1025. after waiting 0 ms 2023-07-16 18:15:49,390 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=148 updating hbase:meta row=504a5498e0d439e6682c838bbf9a1724, regionState=CLOSED 2023-07-16 18:15:49,390 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,aaaaa,1689531348084.cf10f6f55144dd8d51d17c763f5b8b1d. 2023-07-16 18:15:49,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689531348084.b578839f3a20dd0d360d0ec8e00c1025. 2023-07-16 18:15:49,390 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,zzzzz,1689531348084.504a5498e0d439e6682c838bbf9a1724.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689531349390"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531349390"}]},"ts":"1689531349390"} 2023-07-16 18:15:49,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,aaaaa,1689531348084.cf10f6f55144dd8d51d17c763f5b8b1d. 
2023-07-16 18:15:49,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,aaaaa,1689531348084.cf10f6f55144dd8d51d17c763f5b8b1d. after waiting 0 ms 2023-07-16 18:15:49,390 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,aaaaa,1689531348084.cf10f6f55144dd8d51d17c763f5b8b1d. 2023-07-16 18:15:49,393 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=149, resume processing ppid=148 2023-07-16 18:15:49,393 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=149, ppid=148, state=SUCCESS; CloseRegionProcedure 504a5498e0d439e6682c838bbf9a1724, server=jenkins-hbase4.apache.org,43375,1689531323422 in 166 msec 2023-07-16 18:15:49,394 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/b578839f3a20dd0d360d0ec8e00c1025/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 18:15:49,394 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/cf10f6f55144dd8d51d17c763f5b8b1d/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 18:15:49,395 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=148, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=504a5498e0d439e6682c838bbf9a1724, UNASSIGN in 173 msec 2023-07-16 18:15:49,395 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689531348084.b578839f3a20dd0d360d0ec8e00c1025. 2023-07-16 18:15:49,395 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for b578839f3a20dd0d360d0ec8e00c1025: 2023-07-16 18:15:49,395 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,aaaaa,1689531348084.cf10f6f55144dd8d51d17c763f5b8b1d. 2023-07-16 18:15:49,395 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for cf10f6f55144dd8d51d17c763f5b8b1d: 2023-07-16 18:15:49,396 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed b578839f3a20dd0d360d0ec8e00c1025 2023-07-16 18:15:49,396 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 590cf672ade80548ba38a35ea3382a7b 2023-07-16 18:15:49,397 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 590cf672ade80548ba38a35ea3382a7b, disabling compactions & flushes 2023-07-16 18:15:49,397 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689531348084.590cf672ade80548ba38a35ea3382a7b. 2023-07-16 18:15:49,397 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689531348084.590cf672ade80548ba38a35ea3382a7b. 
2023-07-16 18:15:49,397 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,i\xBF\x14i\xBE,1689531348084.590cf672ade80548ba38a35ea3382a7b. after waiting 0 ms 2023-07-16 18:15:49,397 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,i\xBF\x14i\xBE,1689531348084.590cf672ade80548ba38a35ea3382a7b. 2023-07-16 18:15:49,397 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=147 updating hbase:meta row=b578839f3a20dd0d360d0ec8e00c1025, regionState=CLOSED 2023-07-16 18:15:49,398 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689531348084.b578839f3a20dd0d360d0ec8e00c1025.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689531349397"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531349397"}]},"ts":"1689531349397"} 2023-07-16 18:15:49,398 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed cf10f6f55144dd8d51d17c763f5b8b1d 2023-07-16 18:15:49,398 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 32768011504f3ab8fbbacad128ad4962 2023-07-16 18:15:49,399 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 32768011504f3ab8fbbacad128ad4962, disabling compactions & flushes 2023-07-16 18:15:49,399 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region Group_testDisabledTableMove,,1689531348084.32768011504f3ab8fbbacad128ad4962. 2023-07-16 18:15:49,399 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on Group_testDisabledTableMove,,1689531348084.32768011504f3ab8fbbacad128ad4962. 2023-07-16 18:15:49,399 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on Group_testDisabledTableMove,,1689531348084.32768011504f3ab8fbbacad128ad4962. after waiting 0 ms 2023-07-16 18:15:49,399 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region Group_testDisabledTableMove,,1689531348084.32768011504f3ab8fbbacad128ad4962. 2023-07-16 18:15:49,399 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=145 updating hbase:meta row=cf10f6f55144dd8d51d17c763f5b8b1d, regionState=CLOSED 2023-07-16 18:15:49,399 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,aaaaa,1689531348084.cf10f6f55144dd8d51d17c763f5b8b1d.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689531349399"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531349399"}]},"ts":"1689531349399"} 2023-07-16 18:15:49,402 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/590cf672ade80548ba38a35ea3382a7b/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 18:15:49,403 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,i\xBF\x14i\xBE,1689531348084.590cf672ade80548ba38a35ea3382a7b. 
2023-07-16 18:15:49,403 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 590cf672ade80548ba38a35ea3382a7b: 2023-07-16 18:15:49,403 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=151, resume processing ppid=147 2023-07-16 18:15:49,403 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=151, ppid=147, state=SUCCESS; CloseRegionProcedure b578839f3a20dd0d360d0ec8e00c1025, server=jenkins-hbase4.apache.org,44563,1689531327107 in 165 msec 2023-07-16 18:15:49,404 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=153, resume processing ppid=145 2023-07-16 18:15:49,404 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=153, ppid=145, state=SUCCESS; CloseRegionProcedure cf10f6f55144dd8d51d17c763f5b8b1d, server=jenkins-hbase4.apache.org,43375,1689531323422 in 164 msec 2023-07-16 18:15:49,404 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 590cf672ade80548ba38a35ea3382a7b 2023-07-16 18:15:49,405 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/Group_testDisabledTableMove/32768011504f3ab8fbbacad128ad4962/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 18:15:49,405 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed Group_testDisabledTableMove,,1689531348084.32768011504f3ab8fbbacad128ad4962. 2023-07-16 18:15:49,406 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 32768011504f3ab8fbbacad128ad4962: 2023-07-16 18:15:49,415 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=146 updating hbase:meta row=590cf672ade80548ba38a35ea3382a7b, regionState=CLOSED 2023-07-16 18:15:49,415 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=147, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=b578839f3a20dd0d360d0ec8e00c1025, UNASSIGN in 183 msec 2023-07-16 18:15:49,415 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689531348084.590cf672ade80548ba38a35ea3382a7b.","families":{"info":[{"qualifier":"regioninfo","vlen":71,"tag":[],"timestamp":"1689531349415"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531349415"}]},"ts":"1689531349415"} 2023-07-16 18:15:49,416 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=145, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=cf10f6f55144dd8d51d17c763f5b8b1d, UNASSIGN in 184 msec 2023-07-16 18:15:49,416 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 32768011504f3ab8fbbacad128ad4962 2023-07-16 18:15:49,416 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=144 updating hbase:meta row=32768011504f3ab8fbbacad128ad4962, regionState=CLOSED 2023-07-16 18:15:49,416 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"Group_testDisabledTableMove,,1689531348084.32768011504f3ab8fbbacad128ad4962.","families":{"info":[{"qualifier":"regioninfo","vlen":66,"tag":[],"timestamp":"1689531349416"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531349416"}]},"ts":"1689531349416"} 2023-07-16 18:15:49,418 INFO [PEWorker-1] 
procedure2.ProcedureExecutor(1824): Finished subprocedure pid=152, resume processing ppid=146 2023-07-16 18:15:49,418 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=152, ppid=146, state=SUCCESS; CloseRegionProcedure 590cf672ade80548ba38a35ea3382a7b, server=jenkins-hbase4.apache.org,44563,1689531327107 in 180 msec 2023-07-16 18:15:49,419 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=146, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=590cf672ade80548ba38a35ea3382a7b, UNASSIGN in 198 msec 2023-07-16 18:15:49,419 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=150, resume processing ppid=144 2023-07-16 18:15:49,419 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=150, ppid=144, state=SUCCESS; CloseRegionProcedure 32768011504f3ab8fbbacad128ad4962, server=jenkins-hbase4.apache.org,43375,1689531323422 in 184 msec 2023-07-16 18:15:49,420 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=144, resume processing ppid=143 2023-07-16 18:15:49,420 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=144, ppid=143, state=SUCCESS; TransitRegionStateProcedure table=Group_testDisabledTableMove, region=32768011504f3ab8fbbacad128ad4962, UNASSIGN in 199 msec 2023-07-16 18:15:49,421 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531349421"}]},"ts":"1689531349421"} 2023-07-16 18:15:49,422 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=Group_testDisabledTableMove, state=DISABLED in hbase:meta 2023-07-16 18:15:49,424 INFO [PEWorker-4] procedure.DisableTableProcedure(305): Set Group_testDisabledTableMove to state=DISABLED 2023-07-16 18:15:49,425 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=143, state=SUCCESS; DisableTableProcedure table=Group_testDisabledTableMove in 218 msec 2023-07-16 18:15:49,473 DEBUG [HBase-Metrics2-1] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'testRename' 2023-07-16 18:15:49,518 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=143 2023-07-16 18:15:49,518 INFO [Listener at localhost/38073] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:Group_testDisabledTableMove, procId: 143 completed 2023-07-16 18:15:49,518 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsAdmin1(370): Moving table Group_testDisabledTableMove to Group_testDisabledTableMove_1913902319 2023-07-16 18:15:49,520 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [Group_testDisabledTableMove] to rsgroup Group_testDisabledTableMove_1913902319 2023-07-16 18:15:49,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:49,522 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1913902319 2023-07-16 18:15:49,523 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:49,523 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:49,525 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(336): Skipping move regions because the table Group_testDisabledTableMove is disabled 2023-07-16 18:15:49,525 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1913902319, current retry=0 2023-07-16 18:15:49,525 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(369): All regions from table(s) [Group_testDisabledTableMove] moved to target group Group_testDisabledTableMove_1913902319. 2023-07-16 18:15:49,525 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:49,527 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:49,528 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:49,529 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=Group_testDisabledTableMove 2023-07-16 18:15:49,530 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 18:15:49,531 INFO [Listener at localhost/38073] client.HBaseAdmin$15(890): Started disable of Group_testDisabledTableMove 2023-07-16 18:15:49,531 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable Group_testDisabledTableMove 2023-07-16 18:15:49,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove at org.apache.hadoop.hbase.master.procedure.AbstractStateMachineTableProcedure.preflightChecks(AbstractStateMachineTableProcedure.java:163) at org.apache.hadoop.hbase.master.procedure.DisableTableProcedure.<init>(DisableTableProcedure.java:78) at org.apache.hadoop.hbase.master.HMaster$11.run(HMaster.java:2429) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.disableTable(HMaster.java:2413) at org.apache.hadoop.hbase.master.MasterRpcServices.disableTable(MasterRpcServices.java:787) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:49,532 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] 
ipc.CallRunner(144): callId: 921 service: MasterService methodName: DisableTable size: 88 connection: 172.31.14.131:45244 deadline: 1689531409531, exception=org.apache.hadoop.hbase.TableNotEnabledException: Group_testDisabledTableMove 2023-07-16 18:15:49,532 DEBUG [Listener at localhost/38073] hbase.HBaseTestingUtility(1826): Table: Group_testDisabledTableMove already disabled, so just deleting it. 2023-07-16 18:15:49,533 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete Group_testDisabledTableMove 2023-07-16 18:15:49,534 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] procedure2.ProcedureExecutor(1029): Stored pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-16 18:15:49,535 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=155, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-16 18:15:49,536 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'Group_testDisabledTableMove' from rsgroup 'Group_testDisabledTableMove_1913902319' 2023-07-16 18:15:49,536 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=155, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-16 18:15:49,537 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:49,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1913902319 2023-07-16 18:15:49,538 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:49,539 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:49,544 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/32768011504f3ab8fbbacad128ad4962 2023-07-16 18:15:49,544 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/590cf672ade80548ba38a35ea3382a7b 2023-07-16 18:15:49,544 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/b578839f3a20dd0d360d0ec8e00c1025 2023-07-16 18:15:49,544 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/504a5498e0d439e6682c838bbf9a1724 2023-07-16 18:15:49,544 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/cf10f6f55144dd8d51d17c763f5b8b1d 2023-07-16 18:15:49,544 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking 
to see if procedure is done pid=155 2023-07-16 18:15:49,546 DEBUG [HFileArchiver-3] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/cf10f6f55144dd8d51d17c763f5b8b1d/f, FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/cf10f6f55144dd8d51d17c763f5b8b1d/recovered.edits] 2023-07-16 18:15:49,546 DEBUG [HFileArchiver-5] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/32768011504f3ab8fbbacad128ad4962/f, FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/32768011504f3ab8fbbacad128ad4962/recovered.edits] 2023-07-16 18:15:49,546 DEBUG [HFileArchiver-1] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/590cf672ade80548ba38a35ea3382a7b/f, FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/590cf672ade80548ba38a35ea3382a7b/recovered.edits] 2023-07-16 18:15:49,546 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/b578839f3a20dd0d360d0ec8e00c1025/f, FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/b578839f3a20dd0d360d0ec8e00c1025/recovered.edits] 2023-07-16 18:15:49,546 DEBUG [HFileArchiver-7] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/504a5498e0d439e6682c838bbf9a1724/f, FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/504a5498e0d439e6682c838bbf9a1724/recovered.edits] 2023-07-16 18:15:49,554 DEBUG [HFileArchiver-5] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/32768011504f3ab8fbbacad128ad4962/recovered.edits/4.seqid to hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/archive/data/default/Group_testDisabledTableMove/32768011504f3ab8fbbacad128ad4962/recovered.edits/4.seqid 2023-07-16 18:15:49,555 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/b578839f3a20dd0d360d0ec8e00c1025/recovered.edits/4.seqid to hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/archive/data/default/Group_testDisabledTableMove/b578839f3a20dd0d360d0ec8e00c1025/recovered.edits/4.seqid 2023-07-16 18:15:49,555 DEBUG [HFileArchiver-3] backup.HFileArchiver(582): Archived from FileablePath, 
hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/cf10f6f55144dd8d51d17c763f5b8b1d/recovered.edits/4.seqid to hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/archive/data/default/Group_testDisabledTableMove/cf10f6f55144dd8d51d17c763f5b8b1d/recovered.edits/4.seqid 2023-07-16 18:15:49,555 DEBUG [HFileArchiver-1] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/590cf672ade80548ba38a35ea3382a7b/recovered.edits/4.seqid to hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/archive/data/default/Group_testDisabledTableMove/590cf672ade80548ba38a35ea3382a7b/recovered.edits/4.seqid 2023-07-16 18:15:49,555 DEBUG [HFileArchiver-5] backup.HFileArchiver(596): Deleted hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/32768011504f3ab8fbbacad128ad4962 2023-07-16 18:15:49,556 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/b578839f3a20dd0d360d0ec8e00c1025 2023-07-16 18:15:49,556 DEBUG [HFileArchiver-3] backup.HFileArchiver(596): Deleted hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/cf10f6f55144dd8d51d17c763f5b8b1d 2023-07-16 18:15:49,556 DEBUG [HFileArchiver-7] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/504a5498e0d439e6682c838bbf9a1724/recovered.edits/4.seqid to hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/archive/data/default/Group_testDisabledTableMove/504a5498e0d439e6682c838bbf9a1724/recovered.edits/4.seqid 2023-07-16 18:15:49,556 DEBUG [HFileArchiver-1] backup.HFileArchiver(596): Deleted hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/590cf672ade80548ba38a35ea3382a7b 2023-07-16 18:15:49,557 DEBUG [HFileArchiver-7] backup.HFileArchiver(596): Deleted hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/.tmp/data/default/Group_testDisabledTableMove/504a5498e0d439e6682c838bbf9a1724 2023-07-16 18:15:49,557 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(328): Archived Group_testDisabledTableMove regions 2023-07-16 18:15:49,559 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=155, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-16 18:15:49,561 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 5 rows of Group_testDisabledTableMove from hbase:meta 2023-07-16 18:15:49,567 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 'Group_testDisabledTableMove' descriptor. 
2023-07-16 18:15:49,568 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=155, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-16 18:15:49,568 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 'Group_testDisabledTableMove' from region states. 2023-07-16 18:15:49,569 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,,1689531348084.32768011504f3ab8fbbacad128ad4962.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689531349568"}]},"ts":"9223372036854775807"} 2023-07-16 18:15:49,569 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,aaaaa,1689531348084.cf10f6f55144dd8d51d17c763f5b8b1d.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689531349568"}]},"ts":"9223372036854775807"} 2023-07-16 18:15:49,569 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,i\\xBF\\x14i\\xBE,1689531348084.590cf672ade80548ba38a35ea3382a7b.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689531349568"}]},"ts":"9223372036854775807"} 2023-07-16 18:15:49,569 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,r\\x1C\\xC7r\\x1B,1689531348084.b578839f3a20dd0d360d0ec8e00c1025.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689531349568"}]},"ts":"9223372036854775807"} 2023-07-16 18:15:49,569 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove,zzzzz,1689531348084.504a5498e0d439e6682c838bbf9a1724.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689531349568"}]},"ts":"9223372036854775807"} 2023-07-16 18:15:49,571 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 5 regions from META 2023-07-16 18:15:49,571 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 32768011504f3ab8fbbacad128ad4962, NAME => 'Group_testDisabledTableMove,,1689531348084.32768011504f3ab8fbbacad128ad4962.', STARTKEY => '', ENDKEY => 'aaaaa'}, {ENCODED => cf10f6f55144dd8d51d17c763f5b8b1d, NAME => 'Group_testDisabledTableMove,aaaaa,1689531348084.cf10f6f55144dd8d51d17c763f5b8b1d.', STARTKEY => 'aaaaa', ENDKEY => 'i\xBF\x14i\xBE'}, {ENCODED => 590cf672ade80548ba38a35ea3382a7b, NAME => 'Group_testDisabledTableMove,i\xBF\x14i\xBE,1689531348084.590cf672ade80548ba38a35ea3382a7b.', STARTKEY => 'i\xBF\x14i\xBE', ENDKEY => 'r\x1C\xC7r\x1B'}, {ENCODED => b578839f3a20dd0d360d0ec8e00c1025, NAME => 'Group_testDisabledTableMove,r\x1C\xC7r\x1B,1689531348084.b578839f3a20dd0d360d0ec8e00c1025.', STARTKEY => 'r\x1C\xC7r\x1B', ENDKEY => 'zzzzz'}, {ENCODED => 504a5498e0d439e6682c838bbf9a1724, NAME => 'Group_testDisabledTableMove,zzzzz,1689531348084.504a5498e0d439e6682c838bbf9a1724.', STARTKEY => 'zzzzz', ENDKEY => ''}] 2023-07-16 18:15:49,571 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 'Group_testDisabledTableMove' as deleted. 
2023-07-16 18:15:49,571 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"Group_testDisabledTableMove","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689531349571"}]},"ts":"9223372036854775807"} 2023-07-16 18:15:49,572 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table Group_testDisabledTableMove state from META 2023-07-16 18:15:49,575 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=155, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=Group_testDisabledTableMove 2023-07-16 18:15:49,576 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=155, state=SUCCESS; DeleteTableProcedure table=Group_testDisabledTableMove in 42 msec 2023-07-16 18:15:49,646 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(1230): Checking to see if procedure is done pid=155 2023-07-16 18:15:49,646 INFO [Listener at localhost/38073] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:Group_testDisabledTableMove, procId: 155 completed 2023-07-16 18:15:49,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:49,649 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:49,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:15:49,650 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 18:15:49,650 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:49,651 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927] to rsgroup default 2023-07-16 18:15:49,653 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:49,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_testDisabledTableMove_1913902319 2023-07-16 18:15:49,654 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:49,655 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:15:49,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(286): Moving 0 region(s) to group Group_testDisabledTableMove_1913902319, current retry=0 2023-07-16 18:15:49,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(261): All regions from [jenkins-hbase4.apache.org,33809,1689531323219, jenkins-hbase4.apache.org,41927,1689531323590] are moved back to Group_testDisabledTableMove_1913902319 2023-07-16 18:15:49,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(438): Move servers done: Group_testDisabledTableMove_1913902319 => default 2023-07-16 18:15:49,660 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:49,661 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_testDisabledTableMove_1913902319 2023-07-16 18:15:49,667 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:49,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:49,668 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 18:15:49,671 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:15:49,672 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:15:49,673 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 18:15:49,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:49,673 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 18:15:49,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:49,674 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 18:15:49,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:49,678 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 18:15:49,679 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:15:49,681 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 18:15:49,682 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 18:15:49,683 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:49,684 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:15:49,685 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:15:49,686 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:15:49,688 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:49,689 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:49,690 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45445] to rsgroup master 2023-07-16 18:15:49,690 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:49,691 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.CallRunner(144): callId: 955 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45244 deadline: 1689532549690, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 2023-07-16 18:15:49,691 WARN [Listener at localhost/38073] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 18:15:49,693 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:49,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:49,693 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:49,694 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927, jenkins-hbase4.apache.org:43375, jenkins-hbase4.apache.org:44563], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 18:15:49,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:15:49,694 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:49,716 INFO [Listener at localhost/38073] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testDisabledTableMove Thread=514 (was 512) Potentially hanging thread: hconnection-0x67375dfc-shared-pool-25 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x1a923ff9-shared-pool-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_239416957_17 at /127.0.0.1:41078 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1709783454_17 at /127.0.0.1:48068 [Waiting for operation #6] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=806 (was 779) - OpenFileDescriptor LEAK? -, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=409 (was 409), ProcessCount=171 (was 173), AvailableMemoryMB=5054 (was 2858) - AvailableMemoryMB LEAK? 
- 2023-07-16 18:15:49,716 WARN [Listener at localhost/38073] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-16 18:15:49,734 INFO [Listener at localhost/38073] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=514, OpenFileDescriptor=806, MaxFileDescriptor=60000, SystemLoadAverage=409, ProcessCount=171, AvailableMemoryMB=5053 2023-07-16 18:15:49,734 WARN [Listener at localhost/38073] hbase.ResourceChecker(130): Thread=514 is superior to 500 2023-07-16 18:15:49,734 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase(132): testRSGroupListDoesNotContainFailedTableCreation 2023-07-16 18:15:49,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:49,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:49,738 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:15:49,738 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-16 18:15:49,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:15:49,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 18:15:49,739 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:15:49,740 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 18:15:49,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:49,743 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 18:15:49,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:15:49,748 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 18:15:49,748 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 18:15:49,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:49,750 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: 
/hbase/rsgroup/master 2023-07-16 18:15:49,752 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:15:49,756 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:15:49,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:49,758 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:49,760 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:45445] to rsgroup master 2023-07-16 18:15:49,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:15:49,760 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] ipc.CallRunner(144): callId: 983 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:45244 deadline: 1689532549760, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 2023-07-16 18:15:49,761 WARN [Listener at localhost/38073] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:45445 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 18:15:49,763 INFO [Listener at localhost/38073] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:49,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:49,764 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:49,764 INFO [Listener at localhost/38073] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:33809, jenkins-hbase4.apache.org:41927, jenkins-hbase4.apache.org:43375, jenkins-hbase4.apache.org:44563], Tables:[hbase:meta, hbase:namespace, unmovedTable, hbase:rsgroup, testRename], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 18:15:49,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:15:49,765 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=45445] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:15:49,765 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-16 18:15:49,766 INFO [Listener at localhost/38073] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-16 18:15:49,766 DEBUG [Listener at localhost/38073] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x2c40ba5c to 127.0.0.1:53498 2023-07-16 18:15:49,766 DEBUG [Listener at localhost/38073] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:15:49,770 DEBUG [Listener at localhost/38073] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-16 18:15:49,770 DEBUG [Listener at localhost/38073] util.JVMClusterUtil(257): Found active master hash=537395340, stopped=false 2023-07-16 18:15:49,771 DEBUG [Listener at localhost/38073] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-16 18:15:49,771 DEBUG [Listener at localhost/38073] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-16 18:15:49,771 INFO [Listener at localhost/38073] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,45445,1689531321197 2023-07-16 18:15:49,772 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:33809-0x1016f588ace0001, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 18:15:49,772 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:44563-0x1016f588ace000b, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 18:15:49,772 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 18:15:49,772 DEBUG 
[Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:41927-0x1016f588ace0003, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 18:15:49,772 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:43375-0x1016f588ace0002, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 18:15:49,773 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:49,773 INFO [Listener at localhost/38073] procedure2.ProcedureExecutor(629): Stopping 2023-07-16 18:15:49,773 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 18:15:49,773 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:41927-0x1016f588ace0003, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 18:15:49,773 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:44563-0x1016f588ace000b, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 18:15:49,773 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:33809-0x1016f588ace0001, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 18:15:49,774 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43375-0x1016f588ace0002, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 18:15:49,774 DEBUG [Listener at localhost/38073] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x4c0e5421 to 127.0.0.1:53498 2023-07-16 18:15:49,774 DEBUG [Listener at localhost/38073] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:15:49,774 INFO [Listener at localhost/38073] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,33809,1689531323219' ***** 2023-07-16 18:15:49,774 INFO [Listener at localhost/38073] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 18:15:49,774 INFO [RS:0;jenkins-hbase4:33809] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 18:15:49,775 INFO [Listener at localhost/38073] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43375,1689531323422' ***** 2023-07-16 18:15:49,778 INFO [Listener at localhost/38073] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 18:15:49,778 INFO [Listener at localhost/38073] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,41927,1689531323590' ***** 2023-07-16 18:15:49,778 INFO [RS:1;jenkins-hbase4:43375] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 18:15:49,778 INFO [Listener at localhost/38073] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 18:15:49,778 INFO [RS:2;jenkins-hbase4:41927] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 18:15:49,778 INFO [Listener at localhost/38073] regionserver.HRegionServer(2297): ***** STOPPING region 
server 'jenkins-hbase4.apache.org,44563,1689531327107' ***** 2023-07-16 18:15:49,778 INFO [Listener at localhost/38073] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 18:15:49,779 INFO [RS:3;jenkins-hbase4:44563] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 18:15:49,794 INFO [RS:0;jenkins-hbase4:33809] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1cb98983{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 18:15:49,794 INFO [RS:3;jenkins-hbase4:44563] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4e85f979{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 18:15:49,794 INFO [RS:1;jenkins-hbase4:43375] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@5612588a{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 18:15:49,794 INFO [RS:2;jenkins-hbase4:41927] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@325896d4{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 18:15:49,799 INFO [RS:3;jenkins-hbase4:44563] server.AbstractConnector(383): Stopped ServerConnector@4fd2dab2{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 18:15:49,799 INFO [RS:2;jenkins-hbase4:41927] server.AbstractConnector(383): Stopped ServerConnector@5248e24{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 18:15:49,799 INFO [RS:3;jenkins-hbase4:44563] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 18:15:49,799 INFO [RS:2;jenkins-hbase4:41927] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 18:15:49,799 INFO [RS:1;jenkins-hbase4:43375] server.AbstractConnector(383): Stopped ServerConnector@587ce315{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 18:15:49,800 INFO [RS:0;jenkins-hbase4:33809] server.AbstractConnector(383): Stopped ServerConnector@38645c85{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 18:15:49,800 INFO [RS:2;jenkins-hbase4:41927] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@415a80d4{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 18:15:49,800 INFO [RS:3;jenkins-hbase4:44563] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@700b9517{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 18:15:49,802 INFO [RS:2;jenkins-hbase4:41927] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@53ca0225{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/hadoop.log.dir/,STOPPED} 2023-07-16 18:15:49,802 INFO [RS:3;jenkins-hbase4:44563] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@20df6e9f{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/hadoop.log.dir/,STOPPED} 2023-07-16 18:15:49,801 INFO [RS:0;jenkins-hbase4:33809] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 18:15:49,801 INFO [RS:1;jenkins-hbase4:43375] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 18:15:49,805 INFO [RS:0;jenkins-hbase4:33809] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@43302c52{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 18:15:49,805 INFO [RS:1;jenkins-hbase4:43375] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@26906e56{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 18:15:49,806 INFO [RS:3;jenkins-hbase4:44563] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 18:15:49,806 INFO [RS:3;jenkins-hbase4:44563] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 18:15:49,807 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 18:15:49,807 INFO [RS:3;jenkins-hbase4:44563] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 18:15:49,807 INFO [RS:1;jenkins-hbase4:43375] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1e5ca88b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/hadoop.log.dir/,STOPPED} 2023-07-16 18:15:49,807 INFO [RS:3;jenkins-hbase4:44563] regionserver.HRegionServer(3305): Received CLOSE for dc4034c470728512f24450a6af763b38 2023-07-16 18:15:49,807 INFO [RS:0;jenkins-hbase4:33809] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@4d5f0290{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/hadoop.log.dir/,STOPPED} 2023-07-16 18:15:49,808 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing dc4034c470728512f24450a6af763b38, disabling compactions & flushes 2023-07-16 18:15:49,808 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38. 2023-07-16 18:15:49,808 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38. 2023-07-16 18:15:49,808 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38. after waiting 0 ms 2023-07-16 18:15:49,808 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38. 
2023-07-16 18:15:49,808 INFO [RS:1;jenkins-hbase4:43375] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 18:15:49,808 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing dc4034c470728512f24450a6af763b38 1/1 column families, dataSize=27.07 KB heapSize=44.69 KB 2023-07-16 18:15:49,809 INFO [RS:2;jenkins-hbase4:41927] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 18:15:49,809 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 18:15:49,809 INFO [RS:3;jenkins-hbase4:44563] regionserver.HRegionServer(3305): Received CLOSE for 583941d24df0f42b80730ed46c98845b 2023-07-16 18:15:49,809 INFO [RS:3;jenkins-hbase4:44563] regionserver.HRegionServer(3305): Received CLOSE for 00c0b9125cfd04be97eeb4893c8c1908 2023-07-16 18:15:49,809 INFO [RS:3;jenkins-hbase4:44563] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:49,809 DEBUG [RS:3;jenkins-hbase4:44563] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x03688c79 to 127.0.0.1:53498 2023-07-16 18:15:49,809 INFO [RS:2;jenkins-hbase4:41927] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 18:15:49,810 INFO [RS:2;jenkins-hbase4:41927] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 18:15:49,810 INFO [RS:2;jenkins-hbase4:41927] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:49,810 DEBUG [RS:2;jenkins-hbase4:41927] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x06110cb7 to 127.0.0.1:53498 2023-07-16 18:15:49,810 DEBUG [RS:2;jenkins-hbase4:41927] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:15:49,810 INFO [RS:2;jenkins-hbase4:41927] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,41927,1689531323590; all regions closed. 2023-07-16 18:15:49,810 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 18:15:49,810 INFO [RS:1;jenkins-hbase4:43375] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 18:15:49,811 INFO [RS:1;jenkins-hbase4:43375] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 18:15:49,811 INFO [RS:1;jenkins-hbase4:43375] regionserver.HRegionServer(3305): Received CLOSE for 169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:49,811 INFO [RS:1;jenkins-hbase4:43375] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:49,811 DEBUG [RS:1;jenkins-hbase4:43375] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x40207005 to 127.0.0.1:53498 2023-07-16 18:15:49,811 DEBUG [RS:1;jenkins-hbase4:43375] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:15:49,811 INFO [RS:1;jenkins-hbase4:43375] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-16 18:15:49,810 DEBUG [RS:3;jenkins-hbase4:44563] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:15:49,810 INFO [RS:0;jenkins-hbase4:33809] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 18:15:49,811 INFO [RS:3;jenkins-hbase4:44563] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 
2023-07-16 18:15:49,811 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 18:15:49,811 INFO [RS:3;jenkins-hbase4:44563] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 18:15:49,811 INFO [RS:0;jenkins-hbase4:33809] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 18:15:49,812 INFO [RS:0;jenkins-hbase4:33809] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 18:15:49,812 INFO [RS:0;jenkins-hbase4:33809] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:49,812 DEBUG [RS:0;jenkins-hbase4:33809] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7edfaaa7 to 127.0.0.1:53498 2023-07-16 18:15:49,812 DEBUG [RS:0;jenkins-hbase4:33809] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:15:49,812 INFO [RS:0;jenkins-hbase4:33809] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,33809,1689531323219; all regions closed. 2023-07-16 18:15:49,812 INFO [RS:3;jenkins-hbase4:44563] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 18:15:49,813 INFO [RS:3;jenkins-hbase4:44563] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-16 18:15:49,814 DEBUG [RS:1;jenkins-hbase4:43375] regionserver.HRegionServer(1478): Online Regions={169e52c14d3c900a0913d50e0cfad311=testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311.} 2023-07-16 18:15:49,815 DEBUG [RS:1;jenkins-hbase4:43375] regionserver.HRegionServer(1504): Waiting on 169e52c14d3c900a0913d50e0cfad311 2023-07-16 18:15:49,816 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 169e52c14d3c900a0913d50e0cfad311, disabling compactions & flushes 2023-07-16 18:15:49,816 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 2023-07-16 18:15:49,816 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 2023-07-16 18:15:49,816 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. after waiting 0 ms 2023-07-16 18:15:49,816 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 
2023-07-16 18:15:49,817 INFO [RS:3;jenkins-hbase4:44563] regionserver.HRegionServer(1474): Waiting on 4 regions to close 2023-07-16 18:15:49,817 DEBUG [RS:3;jenkins-hbase4:44563] regionserver.HRegionServer(1478): Online Regions={dc4034c470728512f24450a6af763b38=hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38., 1588230740=hbase:meta,,1.1588230740, 583941d24df0f42b80730ed46c98845b=hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b., 00c0b9125cfd04be97eeb4893c8c1908=unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908.} 2023-07-16 18:15:49,817 DEBUG [RS:3;jenkins-hbase4:44563] regionserver.HRegionServer(1504): Waiting on 00c0b9125cfd04be97eeb4893c8c1908, 1588230740, 583941d24df0f42b80730ed46c98845b, dc4034c470728512f24450a6af763b38 2023-07-16 18:15:49,818 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 18:15:49,818 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 18:15:49,818 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 18:15:49,818 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 18:15:49,818 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 18:15:49,818 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=76.59 KB heapSize=120.50 KB 2023-07-16 18:15:49,830 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 18:15:49,830 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 18:15:49,830 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 18:15:49,830 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 18:15:49,844 DEBUG [RS:0;jenkins-hbase4:33809] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/oldWALs 2023-07-16 18:15:49,844 INFO [RS:0;jenkins-hbase4:33809] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33809%2C1689531323219.meta:.meta(num 1689531325679) 2023-07-16 18:15:49,845 DEBUG [RS:2;jenkins-hbase4:41927] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/oldWALs 2023-07-16 18:15:49,845 INFO [RS:2;jenkins-hbase4:41927] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C41927%2C1689531323590:(num 1689531325466) 2023-07-16 18:15:49,845 DEBUG [RS:2;jenkins-hbase4:41927] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:15:49,845 INFO [RS:2;jenkins-hbase4:41927] regionserver.LeaseManager(133): Closed leases 2023-07-16 18:15:49,850 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/testRename/169e52c14d3c900a0913d50e0cfad311/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-16 18:15:49,851 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(1838): Closed testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 2023-07-16 18:15:49,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 169e52c14d3c900a0913d50e0cfad311: 2023-07-16 18:15:49,851 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed testRename,,1689531342473.169e52c14d3c900a0913d50e0cfad311. 2023-07-16 18:15:49,854 INFO [RS:2;jenkins-hbase4:41927] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-16 18:15:49,855 INFO [RS:2;jenkins-hbase4:41927] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 18:15:49,855 INFO [RS:2;jenkins-hbase4:41927] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 18:15:49,855 INFO [RS:2;jenkins-hbase4:41927] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 18:15:49,856 INFO [RS:2;jenkins-hbase4:41927] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:41927 2023-07-16 18:15:49,860 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 18:15:49,867 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=27.07 KB at sequenceid=101 (bloomFilter=true), to=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/rsgroup/dc4034c470728512f24450a6af763b38/.tmp/m/209d4d4fb7f641f694af0c365ec2a5a4 2023-07-16 18:15:49,871 DEBUG [RS:0;jenkins-hbase4:33809] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/oldWALs 2023-07-16 18:15:49,871 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:15:49,871 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:33809-0x1016f588ace0001, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:49,871 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:44563-0x1016f588ace000b, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:49,871 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:33809-0x1016f588ace0001, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:15:49,871 INFO [RS:0;jenkins-hbase4:33809] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C33809%2C1689531323219:(num 1689531325466) 2023-07-16 18:15:49,871 DEBUG [RS:0;jenkins-hbase4:33809] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:15:49,871 INFO [RS:0;jenkins-hbase4:33809] regionserver.LeaseManager(133): Closed leases 2023-07-16 18:15:49,871 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): 
regionserver:44563-0x1016f588ace000b, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:15:49,871 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:41927-0x1016f588ace0003, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:49,872 INFO [RS:0;jenkins-hbase4:33809] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-16 18:15:49,872 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:41927-0x1016f588ace0003, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:15:49,873 INFO [RS:0;jenkins-hbase4:33809] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 18:15:49,873 INFO [RS:0;jenkins-hbase4:33809] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 18:15:49,873 INFO [RS:0;jenkins-hbase4:33809] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 18:15:49,873 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,41927,1689531323590] 2023-07-16 18:15:49,873 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 18:15:49,873 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,41927,1689531323590; numProcessing=1 2023-07-16 18:15:49,874 INFO [RS:0;jenkins-hbase4:33809] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:33809 2023-07-16 18:15:49,873 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:43375-0x1016f588ace0002, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,41927,1689531323590 2023-07-16 18:15:49,874 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:43375-0x1016f588ace0002, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:15:49,876 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,41927,1689531323590 already deleted, retry=false 2023-07-16 18:15:49,876 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,41927,1689531323590 expired; onlineServers=3 2023-07-16 18:15:49,876 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:15:49,876 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:43375-0x1016f588ace0002, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:49,876 DEBUG 
[Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:44563-0x1016f588ace000b, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:49,877 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:33809-0x1016f588ace0001, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,33809,1689531323219 2023-07-16 18:15:49,877 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,33809,1689531323219] 2023-07-16 18:15:49,877 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,33809,1689531323219; numProcessing=2 2023-07-16 18:15:49,879 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,33809,1689531323219 already deleted, retry=false 2023-07-16 18:15:49,879 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 209d4d4fb7f641f694af0c365ec2a5a4 2023-07-16 18:15:49,880 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,33809,1689531323219 expired; onlineServers=2 2023-07-16 18:15:49,881 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/rsgroup/dc4034c470728512f24450a6af763b38/.tmp/m/209d4d4fb7f641f694af0c365ec2a5a4 as hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/rsgroup/dc4034c470728512f24450a6af763b38/m/209d4d4fb7f641f694af0c365ec2a5a4 2023-07-16 18:15:49,882 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=70.78 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/.tmp/info/56f46ce448554b0d918dc9677d997dca 2023-07-16 18:15:49,888 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 209d4d4fb7f641f694af0c365ec2a5a4 2023-07-16 18:15:49,888 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/rsgroup/dc4034c470728512f24450a6af763b38/m/209d4d4fb7f641f694af0c365ec2a5a4, entries=28, sequenceid=101, filesize=6.1 K 2023-07-16 18:15:49,889 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 56f46ce448554b0d918dc9677d997dca 2023-07-16 18:15:49,889 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~27.07 KB/27723, heapSize ~44.67 KB/45744, currentSize=0 B/0 for dc4034c470728512f24450a6af763b38 in 81ms, sequenceid=101, compaction requested=false 2023-07-16 18:15:49,896 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote 
file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/rsgroup/dc4034c470728512f24450a6af763b38/recovered.edits/104.seqid, newMaxSeqId=104, maxSeqId=12 2023-07-16 18:15:49,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 18:15:49,897 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38. 2023-07-16 18:15:49,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for dc4034c470728512f24450a6af763b38: 2023-07-16 18:15:49,897 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689531326156.dc4034c470728512f24450a6af763b38. 2023-07-16 18:15:49,898 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 583941d24df0f42b80730ed46c98845b, disabling compactions & flushes 2023-07-16 18:15:49,898 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b. 2023-07-16 18:15:49,898 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b. 2023-07-16 18:15:49,898 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b. after waiting 0 ms 2023-07-16 18:15:49,898 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b. 2023-07-16 18:15:49,907 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/namespace/583941d24df0f42b80730ed46c98845b/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=9 2023-07-16 18:15:49,908 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b. 2023-07-16 18:15:49,909 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 583941d24df0f42b80730ed46c98845b: 2023-07-16 18:15:49,909 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689531325937.583941d24df0f42b80730ed46c98845b. 2023-07-16 18:15:49,912 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 00c0b9125cfd04be97eeb4893c8c1908, disabling compactions & flushes 2023-07-16 18:15:49,913 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 2023-07-16 18:15:49,913 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 2023-07-16 18:15:49,913 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 
after waiting 0 ms 2023-07-16 18:15:49,913 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 2023-07-16 18:15:49,917 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/default/unmovedTable/00c0b9125cfd04be97eeb4893c8c1908/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=7 2023-07-16 18:15:49,918 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 2023-07-16 18:15:49,918 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 00c0b9125cfd04be97eeb4893c8c1908: 2023-07-16 18:15:49,919 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed unmovedTable,,1689531344136.00c0b9125cfd04be97eeb4893c8c1908. 2023-07-16 18:15:49,919 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=2 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/.tmp/rep_barrier/92ef78b5183e48cdb2ca4d1184605287 2023-07-16 18:15:49,925 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 92ef78b5183e48cdb2ca4d1184605287 2023-07-16 18:15:50,015 INFO [RS:1;jenkins-hbase4:43375] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43375,1689531323422; all regions closed. 2023-07-16 18:15:50,018 DEBUG [RS:3;jenkins-hbase4:44563] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-16 18:15:50,023 DEBUG [RS:1;jenkins-hbase4:43375] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/oldWALs 2023-07-16 18:15:50,023 INFO [RS:1;jenkins-hbase4:43375] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43375%2C1689531323422:(num 1689531325470) 2023-07-16 18:15:50,023 DEBUG [RS:1;jenkins-hbase4:43375] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:15:50,023 INFO [RS:1;jenkins-hbase4:43375] regionserver.LeaseManager(133): Closed leases 2023-07-16 18:15:50,023 INFO [RS:1;jenkins-hbase4:43375] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-16 18:15:50,023 INFO [RS:1;jenkins-hbase4:43375] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 18:15:50,023 INFO [RS:1;jenkins-hbase4:43375] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 18:15:50,023 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 18:15:50,023 INFO [RS:1;jenkins-hbase4:43375] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-16 18:15:50,024 INFO [RS:1;jenkins-hbase4:43375] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43375 2023-07-16 18:15:50,026 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:44563-0x1016f588ace000b, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:50,026 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:15:50,026 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:43375-0x1016f588ace0002, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43375,1689531323422 2023-07-16 18:15:50,027 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43375,1689531323422] 2023-07-16 18:15:50,027 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43375,1689531323422; numProcessing=3 2023-07-16 18:15:50,029 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43375,1689531323422 already deleted, retry=false 2023-07-16 18:15:50,029 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43375,1689531323422 expired; onlineServers=1 2023-07-16 18:15:50,219 DEBUG [RS:3;jenkins-hbase4:44563] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-16 18:15:50,341 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=3.81 KB at sequenceid=210 (bloomFilter=false), to=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/.tmp/table/ddf851373843471eb60f5cb89c8f87df 2023-07-16 18:15:50,348 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ddf851373843471eb60f5cb89c8f87df 2023-07-16 18:15:50,349 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/.tmp/info/56f46ce448554b0d918dc9677d997dca as hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/info/56f46ce448554b0d918dc9677d997dca 2023-07-16 18:15:50,355 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 56f46ce448554b0d918dc9677d997dca 2023-07-16 18:15:50,356 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/info/56f46ce448554b0d918dc9677d997dca, entries=93, sequenceid=210, filesize=15.5 K 2023-07-16 18:15:50,356 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/.tmp/rep_barrier/92ef78b5183e48cdb2ca4d1184605287 as hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/rep_barrier/92ef78b5183e48cdb2ca4d1184605287 2023-07-16 18:15:50,363 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 92ef78b5183e48cdb2ca4d1184605287 2023-07-16 18:15:50,363 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/rep_barrier/92ef78b5183e48cdb2ca4d1184605287, entries=18, sequenceid=210, filesize=6.9 K 2023-07-16 18:15:50,364 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/.tmp/table/ddf851373843471eb60f5cb89c8f87df as hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/table/ddf851373843471eb60f5cb89c8f87df 2023-07-16 18:15:50,369 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ddf851373843471eb60f5cb89c8f87df 2023-07-16 18:15:50,369 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/table/ddf851373843471eb60f5cb89c8f87df, entries=27, sequenceid=210, filesize=7.2 K 2023-07-16 18:15:50,370 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~76.59 KB/78427, heapSize ~120.45 KB/123344, currentSize=0 B/0 for 1588230740 in 552ms, sequenceid=210, compaction requested=false 2023-07-16 18:15:50,383 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/data/hbase/meta/1588230740/recovered.edits/213.seqid, newMaxSeqId=213, maxSeqId=18 2023-07-16 18:15:50,384 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 18:15:50,385 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 18:15:50,385 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-16 18:15:50,385 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-16 18:15:50,397 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-16 18:15:50,397 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-16 18:15:50,419 INFO [RS:3;jenkins-hbase4:44563] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44563,1689531327107; all regions closed. 
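Editorial note: the entries above record hbase:meta flushing its 'info', 'rep_barrier', and 'table' families to store files ("Finished flush of dataSize ~76.59 KB ... in 552ms") before the region is closed. That flush is performed internally by the close path; for reference, a hedged sketch of how client code can request the same memstore-to-HFile flush through the Admin API (the table name is taken from this log only as an example):

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class TableFlushSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Asks the hosting region servers to write the table's memstores out to
      // HFiles, the same mechanism the region close path uses above.
      admin.flush(TableName.valueOf("unmovedTable"));
    }
  }
}
```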
2023-07-16 18:15:50,427 DEBUG [RS:3;jenkins-hbase4:44563] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/oldWALs 2023-07-16 18:15:50,427 INFO [RS:3;jenkins-hbase4:44563] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44563%2C1689531327107.meta:.meta(num 1689531328319) 2023-07-16 18:15:50,433 DEBUG [RS:3;jenkins-hbase4:44563] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/oldWALs 2023-07-16 18:15:50,433 INFO [RS:3;jenkins-hbase4:44563] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C44563%2C1689531327107:(num 1689531327539) 2023-07-16 18:15:50,434 DEBUG [RS:3;jenkins-hbase4:44563] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:15:50,434 INFO [RS:3;jenkins-hbase4:44563] regionserver.LeaseManager(133): Closed leases 2023-07-16 18:15:50,434 INFO [RS:3;jenkins-hbase4:44563] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-16 18:15:50,434 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 18:15:50,435 INFO [RS:3;jenkins-hbase4:44563] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44563 2023-07-16 18:15:50,438 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:15:50,439 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:44563-0x1016f588ace000b, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,44563,1689531327107 2023-07-16 18:15:50,439 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,44563,1689531327107] 2023-07-16 18:15:50,439 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,44563,1689531327107; numProcessing=4 2023-07-16 18:15:50,441 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,44563,1689531327107 already deleted, retry=false 2023-07-16 18:15:50,441 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,44563,1689531327107 expired; onlineServers=0 2023-07-16 18:15:50,441 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,45445,1689531321197' ***** 2023-07-16 18:15:50,441 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-16 18:15:50,442 DEBUG [M:0;jenkins-hbase4:45445] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@61350ab7, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 18:15:50,443 INFO [M:0;jenkins-hbase4:45445] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 18:15:50,445 DEBUG [Listener at 
localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-16 18:15:50,445 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:50,446 INFO [M:0;jenkins-hbase4:45445] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@1f4374a9{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-16 18:15:50,446 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 18:15:50,446 INFO [M:0;jenkins-hbase4:45445] server.AbstractConnector(383): Stopped ServerConnector@5ee050b2{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 18:15:50,446 INFO [M:0;jenkins-hbase4:45445] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 18:15:50,447 INFO [M:0;jenkins-hbase4:45445] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@3ebc6750{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 18:15:50,447 INFO [M:0;jenkins-hbase4:45445] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1ee15e41{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/hadoop.log.dir/,STOPPED} 2023-07-16 18:15:50,448 INFO [M:0;jenkins-hbase4:45445] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,45445,1689531321197 2023-07-16 18:15:50,448 INFO [M:0;jenkins-hbase4:45445] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,45445,1689531321197; all regions closed. 2023-07-16 18:15:50,448 DEBUG [M:0;jenkins-hbase4:45445] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:15:50,448 INFO [M:0;jenkins-hbase4:45445] master.HMaster(1491): Stopping master jetty server 2023-07-16 18:15:50,449 INFO [M:0;jenkins-hbase4:45445] server.AbstractConnector(383): Stopped ServerConnector@10d7fd75{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 18:15:50,450 DEBUG [M:0;jenkins-hbase4:45445] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-16 18:15:50,450 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 
2023-07-16 18:15:50,450 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689531325031] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689531325031,5,FailOnTimeoutGroup] 2023-07-16 18:15:50,450 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689531325026] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689531325026,5,FailOnTimeoutGroup] 2023-07-16 18:15:50,450 DEBUG [M:0;jenkins-hbase4:45445] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-16 18:15:50,450 INFO [M:0;jenkins-hbase4:45445] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-16 18:15:50,450 INFO [M:0;jenkins-hbase4:45445] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-16 18:15:50,450 INFO [M:0;jenkins-hbase4:45445] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-16 18:15:50,451 DEBUG [M:0;jenkins-hbase4:45445] master.HMaster(1512): Stopping service threads 2023-07-16 18:15:50,451 INFO [M:0;jenkins-hbase4:45445] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-16 18:15:50,451 ERROR [M:0;jenkins-hbase4:45445] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] Thread[HFileArchiver-1,5,PEWorkerGroup] Thread[HFileArchiver-2,5,PEWorkerGroup] Thread[HFileArchiver-3,5,PEWorkerGroup] Thread[HFileArchiver-4,5,PEWorkerGroup] Thread[HFileArchiver-5,5,PEWorkerGroup] Thread[HFileArchiver-6,5,PEWorkerGroup] Thread[HFileArchiver-7,5,PEWorkerGroup] Thread[HFileArchiver-8,5,PEWorkerGroup] 2023-07-16 18:15:50,454 INFO [M:0;jenkins-hbase4:45445] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-16 18:15:50,456 DEBUG [M:0;jenkins-hbase4:45445] zookeeper.ZKUtil(398): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-16 18:15:50,457 WARN [M:0;jenkins-hbase4:45445] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-16 18:15:50,457 INFO [M:0;jenkins-hbase4:45445] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-16 18:15:50,457 INFO [M:0;jenkins-hbase4:45445] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-16 18:15:50,457 DEBUG [M:0;jenkins-hbase4:45445] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-16 18:15:50,457 INFO [M:0;jenkins-hbase4:45445] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 18:15:50,457 DEBUG [M:0;jenkins-hbase4:45445] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 18:15:50,457 DEBUG [M:0;jenkins-hbase4:45445] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
after waiting 0 ms 2023-07-16 18:15:50,457 DEBUG [M:0;jenkins-hbase4:45445] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 18:15:50,457 INFO [M:0;jenkins-hbase4:45445] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=519.26 KB heapSize=621.42 KB 2023-07-16 18:15:50,461 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-16 18:15:50,473 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:43375-0x1016f588ace0002, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:15:50,473 INFO [RS:1;jenkins-hbase4:43375] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43375,1689531323422; zookeeper connection closed. 2023-07-16 18:15:50,473 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:43375-0x1016f588ace0002, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:15:50,473 INFO [M:0;jenkins-hbase4:45445] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=519.26 KB at sequenceid=1152 (bloomFilter=true), to=hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/8770c35cf7c04c25b07f8150191fa9fc 2023-07-16 18:15:50,473 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@480d7391] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@480d7391 2023-07-16 18:15:50,480 DEBUG [M:0;jenkins-hbase4:45445] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/8770c35cf7c04c25b07f8150191fa9fc as hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/8770c35cf7c04c25b07f8150191fa9fc 2023-07-16 18:15:50,486 INFO [M:0;jenkins-hbase4:45445] regionserver.HStore(1080): Added hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/8770c35cf7c04c25b07f8150191fa9fc, entries=154, sequenceid=1152, filesize=27.1 K 2023-07-16 18:15:50,487 INFO [M:0;jenkins-hbase4:45445] regionserver.HRegion(2948): Finished flush of dataSize ~519.26 KB/531724, heapSize ~621.41 KB/636320, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 29ms, sequenceid=1152, compaction requested=false 2023-07-16 18:15:50,489 INFO [M:0;jenkins-hbase4:45445] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-16 18:15:50,489 DEBUG [M:0;jenkins-hbase4:45445] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 18:15:50,497 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/MasterData/WALs/jenkins-hbase4.apache.org,45445,1689531321197/jenkins-hbase4.apache.org%2C45445%2C1689531321197.1689531324225 not finished, retry = 0 2023-07-16 18:15:50,573 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:33809-0x1016f588ace0001, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:15:50,573 INFO [RS:0;jenkins-hbase4:33809] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,33809,1689531323219; zookeeper connection closed. 2023-07-16 18:15:50,573 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:33809-0x1016f588ace0001, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:15:50,573 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@574d3121] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@574d3121 2023-07-16 18:15:50,598 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 18:15:50,598 INFO [M:0;jenkins-hbase4:45445] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-16 18:15:50,599 INFO [M:0;jenkins-hbase4:45445] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:45445 2023-07-16 18:15:50,602 DEBUG [M:0;jenkins-hbase4:45445] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,45445,1689531321197 already deleted, retry=false 2023-07-16 18:15:50,673 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:41927-0x1016f588ace0003, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:15:50,673 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:41927-0x1016f588ace0003, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:15:50,673 INFO [RS:2;jenkins-hbase4:41927] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,41927,1689531323590; zookeeper connection closed. 2023-07-16 18:15:50,675 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@725c8b31] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@725c8b31 2023-07-16 18:15:50,774 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:15:50,774 INFO [M:0;jenkins-hbase4:45445] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,45445,1689531321197; zookeeper connection closed. 
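Editorial note: with the region servers and master now reporting "zookeeper connection closed", the entries that follow show the shutdown completing and the utility immediately starting a fresh mini-cluster with StartMiniClusterOption{numMasters=1, numRegionServers=3, numDataNodes=3, numZkServers=1} for the next test. A minimal sketch, assuming the standard builder API, of how a test requests that topology:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterStartSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    // Mirrors the StartMiniClusterOption printed in the log entries below.
    StartMiniClusterOption option = StartMiniClusterOption.builder()
        .numMasters(1)
        .numRegionServers(3)
        .numDataNodes(3)
        .numZkServers(1)
        .build();
    util.startMiniCluster(option);   // brings up ZooKeeper, DFS, master, region servers
    try {
      // ... run test logic against util.getConnection() ...
    } finally {
      util.shutdownMiniCluster();    // tears everything down again, as logged above
    }
  }
}
```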
2023-07-16 18:15:50,774 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): master:45445-0x1016f588ace0000, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:15:50,874 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:44563-0x1016f588ace000b, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:15:50,874 INFO [RS:3;jenkins-hbase4:44563] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44563,1689531327107; zookeeper connection closed. 2023-07-16 18:15:50,874 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): regionserver:44563-0x1016f588ace000b, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:15:50,874 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@364e8f95] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@364e8f95 2023-07-16 18:15:50,874 INFO [Listener at localhost/38073] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-16 18:15:50,875 WARN [Listener at localhost/38073] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 18:15:50,880 INFO [Listener at localhost/38073] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 18:15:50,984 WARN [BP-761019734-172.31.14.131-1689531317183 heartbeating to localhost/127.0.0.1:36523] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 18:15:50,984 WARN [BP-761019734-172.31.14.131-1689531317183 heartbeating to localhost/127.0.0.1:36523] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-761019734-172.31.14.131-1689531317183 (Datanode Uuid 0f72b657-df25-4af1-8609-e86c97193b0a) service to localhost/127.0.0.1:36523 2023-07-16 18:15:50,986 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/cluster_87eeeeb5-5880-a92c-ac20-b2d9553ef3c2/dfs/data/data5/current/BP-761019734-172.31.14.131-1689531317183] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 18:15:50,986 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/cluster_87eeeeb5-5880-a92c-ac20-b2d9553ef3c2/dfs/data/data6/current/BP-761019734-172.31.14.131-1689531317183] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 18:15:50,988 WARN [Listener at localhost/38073] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 18:15:50,990 INFO [Listener at localhost/38073] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 18:15:51,093 WARN [BP-761019734-172.31.14.131-1689531317183 heartbeating to localhost/127.0.0.1:36523] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 18:15:51,094 WARN [BP-761019734-172.31.14.131-1689531317183 heartbeating to localhost/127.0.0.1:36523] datanode.BPServiceActor(857): Ending block pool service for: Block pool 
BP-761019734-172.31.14.131-1689531317183 (Datanode Uuid fcd6476b-4f3d-4459-9937-6defe535eaf1) service to localhost/127.0.0.1:36523 2023-07-16 18:15:51,094 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/cluster_87eeeeb5-5880-a92c-ac20-b2d9553ef3c2/dfs/data/data3/current/BP-761019734-172.31.14.131-1689531317183] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 18:15:51,095 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/cluster_87eeeeb5-5880-a92c-ac20-b2d9553ef3c2/dfs/data/data4/current/BP-761019734-172.31.14.131-1689531317183] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 18:15:51,096 WARN [Listener at localhost/38073] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 18:15:51,098 INFO [Listener at localhost/38073] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 18:15:51,201 WARN [BP-761019734-172.31.14.131-1689531317183 heartbeating to localhost/127.0.0.1:36523] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 18:15:51,201 WARN [BP-761019734-172.31.14.131-1689531317183 heartbeating to localhost/127.0.0.1:36523] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-761019734-172.31.14.131-1689531317183 (Datanode Uuid fdd5abaa-ab30-459a-874d-e3e11aad81f8) service to localhost/127.0.0.1:36523 2023-07-16 18:15:51,202 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/cluster_87eeeeb5-5880-a92c-ac20-b2d9553ef3c2/dfs/data/data1/current/BP-761019734-172.31.14.131-1689531317183] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 18:15:51,202 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/cluster_87eeeeb5-5880-a92c-ac20-b2d9553ef3c2/dfs/data/data2/current/BP-761019734-172.31.14.131-1689531317183] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 18:15:51,235 INFO [Listener at localhost/38073] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 18:15:51,355 INFO [Listener at localhost/38073] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-16 18:15:51,403 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-16 18:15:51,403 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-16 18:15:51,403 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/hadoop.log.dir 
so I do NOT create it in target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d 2023-07-16 18:15:51,403 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/1ac4d196-f4be-3944-d331-ea30eb493ba6/hadoop.tmp.dir so I do NOT create it in target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d 2023-07-16 18:15:51,403 INFO [Listener at localhost/38073] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/cluster_3f1a6989-300d-fb2d-a4a0-f96584988f7d, deleteOnExit=true 2023-07-16 18:15:51,403 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-16 18:15:51,403 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/test.cache.data in system properties and HBase conf 2023-07-16 18:15:51,404 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/hadoop.tmp.dir in system properties and HBase conf 2023-07-16 18:15:51,404 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/hadoop.log.dir in system properties and HBase conf 2023-07-16 18:15:51,404 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-16 18:15:51,404 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-16 18:15:51,404 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-16 18:15:51,404 DEBUG [Listener at localhost/38073] fs.HFileSystem(308): The file system is not a DistributedFileSystem. 
Skipping on block location reordering 2023-07-16 18:15:51,404 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-16 18:15:51,404 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-16 18:15:51,405 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-16 18:15:51,405 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-16 18:15:51,405 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-16 18:15:51,405 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-16 18:15:51,405 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-16 18:15:51,405 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-16 18:15:51,405 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-16 18:15:51,405 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/nfs.dump.dir in system properties and HBase conf 2023-07-16 18:15:51,405 INFO [Listener at localhost/38073] 
hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/java.io.tmpdir in system properties and HBase conf 2023-07-16 18:15:51,405 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-16 18:15:51,406 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-16 18:15:51,406 INFO [Listener at localhost/38073] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-16 18:15:51,410 WARN [Listener at localhost/38073] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-16 18:15:51,410 WARN [Listener at localhost/38073] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-16 18:15:51,447 WARN [Listener at localhost/38073] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties 2023-07-16 18:15:51,452 DEBUG [Listener at localhost/38073-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1016f588ace000a, quorum=127.0.0.1:53498, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-16 18:15:51,452 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1016f588ace000a, quorum=127.0.0.1:53498, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-16 18:15:51,509 WARN [Listener at localhost/38073] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 18:15:51,512 INFO [Listener at localhost/38073] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 18:15:51,520 INFO [Listener at localhost/38073] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/java.io.tmpdir/Jetty_localhost_37799_hdfs____.frpz1f/webapp 2023-07-16 18:15:51,651 INFO [Listener at localhost/38073] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37799 2023-07-16 18:15:51,657 WARN [Listener at localhost/38073] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-16 18:15:51,657 WARN [Listener at localhost/38073] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-16 18:15:51,705 WARN [Listener at localhost/40765] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 
18:15:51,728 WARN [Listener at localhost/40765] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 18:15:51,730 WARN [Listener at localhost/40765] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 18:15:51,732 INFO [Listener at localhost/40765] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 18:15:51,736 INFO [Listener at localhost/40765] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/java.io.tmpdir/Jetty_localhost_46093_datanode____eqh4rz/webapp 2023-07-16 18:15:51,790 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 18:15:51,790 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-16 18:15:51,791 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(152): Removing adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-16 18:15:51,832 INFO [Listener at localhost/40765] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46093 2023-07-16 18:15:51,839 WARN [Listener at localhost/36611] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 18:15:51,862 WARN [Listener at localhost/36611] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 18:15:51,865 WARN [Listener at localhost/36611] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 18:15:51,866 INFO [Listener at localhost/36611] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 18:15:51,870 INFO [Listener at localhost/36611] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/java.io.tmpdir/Jetty_localhost_39821_datanode____lmc8u0/webapp 2023-07-16 18:15:51,952 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1663ea91042cb601: Processing first storage report for DS-29c49014-e67d-4232-82d5-0b72dfba0857 from datanode ee7e8b8d-e553-4339-83a7-e7e361bf8bba 2023-07-16 18:15:51,953 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1663ea91042cb601: from storage DS-29c49014-e67d-4232-82d5-0b72dfba0857 node DatanodeRegistration(127.0.0.1:43791, datanodeUuid=ee7e8b8d-e553-4339-83a7-e7e361bf8bba, infoPort=37873, infoSecurePort=0, ipcPort=36611, storageInfo=lv=-57;cid=testClusterID;nsid=1159153530;c=1689531351413), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 18:15:51,953 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x1663ea91042cb601: Processing 
first storage report for DS-7ebd092d-5f20-4571-b01a-8085d443ccb9 from datanode ee7e8b8d-e553-4339-83a7-e7e361bf8bba 2023-07-16 18:15:51,953 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x1663ea91042cb601: from storage DS-7ebd092d-5f20-4571-b01a-8085d443ccb9 node DatanodeRegistration(127.0.0.1:43791, datanodeUuid=ee7e8b8d-e553-4339-83a7-e7e361bf8bba, infoPort=37873, infoSecurePort=0, ipcPort=36611, storageInfo=lv=-57;cid=testClusterID;nsid=1159153530;c=1689531351413), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 18:15:51,984 INFO [Listener at localhost/36611] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:39821 2023-07-16 18:15:51,994 WARN [Listener at localhost/43051] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 18:15:52,035 WARN [Listener at localhost/43051] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 18:15:52,038 WARN [Listener at localhost/43051] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 18:15:52,040 INFO [Listener at localhost/43051] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 18:15:52,046 INFO [Listener at localhost/43051] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/java.io.tmpdir/Jetty_localhost_37367_datanode____.6nebbd/webapp 2023-07-16 18:15:52,118 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbffaaa9c8afbfe34: Processing first storage report for DS-695eb77d-ee54-4846-805d-18783915c745 from datanode 72410e25-76b2-421c-bc45-56632eaac1de 2023-07-16 18:15:52,118 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbffaaa9c8afbfe34: from storage DS-695eb77d-ee54-4846-805d-18783915c745 node DatanodeRegistration(127.0.0.1:34759, datanodeUuid=72410e25-76b2-421c-bc45-56632eaac1de, infoPort=44777, infoSecurePort=0, ipcPort=43051, storageInfo=lv=-57;cid=testClusterID;nsid=1159153530;c=1689531351413), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 18:15:52,118 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xbffaaa9c8afbfe34: Processing first storage report for DS-8adea777-827e-49f6-877f-3bbe11ddb896 from datanode 72410e25-76b2-421c-bc45-56632eaac1de 2023-07-16 18:15:52,119 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xbffaaa9c8afbfe34: from storage DS-8adea777-827e-49f6-877f-3bbe11ddb896 node DatanodeRegistration(127.0.0.1:34759, datanodeUuid=72410e25-76b2-421c-bc45-56632eaac1de, infoPort=44777, infoSecurePort=0, ipcPort=43051, storageInfo=lv=-57;cid=testClusterID;nsid=1159153530;c=1689531351413), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 18:15:52,164 INFO [Listener at localhost/43051] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37367 2023-07-16 18:15:52,182 WARN [Listener at localhost/33941] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 
18:15:52,291 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x67ff84383b2d6eea: Processing first storage report for DS-c499a704-5e35-4d4a-b751-ff215efc7044 from datanode 144f6767-fb73-4624-be56-8470e5f45b15 2023-07-16 18:15:52,291 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x67ff84383b2d6eea: from storage DS-c499a704-5e35-4d4a-b751-ff215efc7044 node DatanodeRegistration(127.0.0.1:41335, datanodeUuid=144f6767-fb73-4624-be56-8470e5f45b15, infoPort=34507, infoSecurePort=0, ipcPort=33941, storageInfo=lv=-57;cid=testClusterID;nsid=1159153530;c=1689531351413), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 18:15:52,291 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x67ff84383b2d6eea: Processing first storage report for DS-735d08f4-a4ef-4543-b214-1cb2c7016dc0 from datanode 144f6767-fb73-4624-be56-8470e5f45b15 2023-07-16 18:15:52,291 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x67ff84383b2d6eea: from storage DS-735d08f4-a4ef-4543-b214-1cb2c7016dc0 node DatanodeRegistration(127.0.0.1:41335, datanodeUuid=144f6767-fb73-4624-be56-8470e5f45b15, infoPort=34507, infoSecurePort=0, ipcPort=33941, storageInfo=lv=-57;cid=testClusterID;nsid=1159153530;c=1689531351413), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 18:15:52,301 DEBUG [Listener at localhost/33941] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d 2023-07-16 18:15:52,304 INFO [Listener at localhost/33941] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/cluster_3f1a6989-300d-fb2d-a4a0-f96584988f7d/zookeeper_0, clientPort=58951, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/cluster_3f1a6989-300d-fb2d-a4a0-f96584988f7d/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/cluster_3f1a6989-300d-fb2d-a4a0-f96584988f7d/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-16 18:15:52,306 INFO [Listener at localhost/33941] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=58951 2023-07-16 18:15:52,306 INFO [Listener at localhost/33941] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:52,307 INFO [Listener at localhost/33941] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:52,330 INFO [Listener at localhost/33941] util.FSUtils(471): Created version file at hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e with version=8 2023-07-16 18:15:52,330 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(1408): The 
hbase.fs.tmp.dir is set to hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/hbase-staging 2023-07-16 18:15:52,331 DEBUG [Listener at localhost/33941] hbase.LocalHBaseCluster(134): Setting Master Port to random. 2023-07-16 18:15:52,331 DEBUG [Listener at localhost/33941] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-16 18:15:52,332 DEBUG [Listener at localhost/33941] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-16 18:15:52,332 DEBUG [Listener at localhost/33941] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-16 18:15:52,333 INFO [Listener at localhost/33941] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 18:15:52,333 INFO [Listener at localhost/33941] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:52,333 INFO [Listener at localhost/33941] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:52,333 INFO [Listener at localhost/33941] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 18:15:52,333 INFO [Listener at localhost/33941] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:52,334 INFO [Listener at localhost/33941] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 18:15:52,334 INFO [Listener at localhost/33941] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 18:15:52,335 INFO [Listener at localhost/33941] ipc.NettyRpcServer(120): Bind to /172.31.14.131:44131 2023-07-16 18:15:52,335 INFO [Listener at localhost/33941] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:52,337 INFO [Listener at localhost/33941] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:52,338 INFO [Listener at localhost/33941] zookeeper.RecoverableZooKeeper(93): Process identifier=master:44131 connecting to ZooKeeper ensemble=127.0.0.1:58951 2023-07-16 18:15:52,351 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:441310x0, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 18:15:52,351 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:44131-0x1016f5907f20000 connected 2023-07-16 18:15:52,371 DEBUG [Listener at localhost/33941] zookeeper.ZKUtil(164): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 18:15:52,371 DEBUG 
[Listener at localhost/33941] zookeeper.ZKUtil(164): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 18:15:52,372 DEBUG [Listener at localhost/33941] zookeeper.ZKUtil(164): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 18:15:52,374 DEBUG [Listener at localhost/33941] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=44131 2023-07-16 18:15:52,375 DEBUG [Listener at localhost/33941] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=44131 2023-07-16 18:15:52,375 DEBUG [Listener at localhost/33941] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=44131 2023-07-16 18:15:52,377 DEBUG [Listener at localhost/33941] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=44131 2023-07-16 18:15:52,378 DEBUG [Listener at localhost/33941] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=44131 2023-07-16 18:15:52,380 INFO [Listener at localhost/33941] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 18:15:52,380 INFO [Listener at localhost/33941] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 18:15:52,380 INFO [Listener at localhost/33941] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 18:15:52,380 INFO [Listener at localhost/33941] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-16 18:15:52,380 INFO [Listener at localhost/33941] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 18:15:52,380 INFO [Listener at localhost/33941] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 18:15:52,381 INFO [Listener at localhost/33941] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
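The ZKUtil(164) entries above record the master process at port 44131 registering watches on znodes that may not exist yet (/hbase/master, /hbase/running, /hbase/acl) against the quorum 127.0.0.1:58951. ZKUtil is HBase-internal, but the same watch-before-create pattern can be sketched with the plain ZooKeeper client; this is only an illustration, and the session timeout below is an assumed value, not one taken from this run.

    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.ZooKeeper;

    public class ZnodeWatchSketch {
      public static void main(String[] args) throws Exception {
        // Quorum address and znode path taken from the log above; the 30s session
        // timeout is an assumption for illustration only.
        ZooKeeper zk = new ZooKeeper("127.0.0.1:58951", 30_000,
            (WatchedEvent e) -> System.out.println("event: " + e.getType() + " on " + e.getPath()));
        // exists() registers the default watcher even when the znode is absent, which is
        // what "Set watcher on znode that does not yet exist" describes.
        boolean present = zk.exists("/hbase/master", true) != null;
        System.out.println("/hbase/master present: " + present);
        zk.close();
      }
    }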
2023-07-16 18:15:52,381 INFO [Listener at localhost/33941] http.HttpServer(1146): Jetty bound to port 36989 2023-07-16 18:15:52,381 INFO [Listener at localhost/33941] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 18:15:52,387 INFO [Listener at localhost/33941] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:52,388 INFO [Listener at localhost/33941] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@780bfb43{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/hadoop.log.dir/,AVAILABLE} 2023-07-16 18:15:52,388 INFO [Listener at localhost/33941] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:52,388 INFO [Listener at localhost/33941] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2e75a497{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 18:15:52,503 INFO [Listener at localhost/33941] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 18:15:52,504 INFO [Listener at localhost/33941] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 18:15:52,504 INFO [Listener at localhost/33941] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 18:15:52,504 INFO [Listener at localhost/33941] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-16 18:15:52,506 INFO [Listener at localhost/33941] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:52,507 INFO [Listener at localhost/33941] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@2e164a3a{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/java.io.tmpdir/jetty-0_0_0_0-36989-hbase-server-2_4_18-SNAPSHOT_jar-_-any-4908191720649487899/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-16 18:15:52,509 INFO [Listener at localhost/33941] server.AbstractConnector(333): Started ServerConnector@78016569{HTTP/1.1, (http/1.1)}{0.0.0.0:36989} 2023-07-16 18:15:52,509 INFO [Listener at localhost/33941] server.Server(415): Started @37288ms 2023-07-16 18:15:52,509 INFO [Listener at localhost/33941] master.HMaster(444): hbase.rootdir=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e, hbase.cluster.distributed=false 2023-07-16 18:15:52,529 INFO [Listener at localhost/33941] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 18:15:52,529 INFO [Listener at localhost/33941] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:52,529 INFO [Listener at localhost/33941] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:52,529 
INFO [Listener at localhost/33941] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 18:15:52,529 INFO [Listener at localhost/33941] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:52,530 INFO [Listener at localhost/33941] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 18:15:52,530 INFO [Listener at localhost/33941] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 18:15:52,531 INFO [Listener at localhost/33941] ipc.NettyRpcServer(120): Bind to /172.31.14.131:39753 2023-07-16 18:15:52,531 INFO [Listener at localhost/33941] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 18:15:52,532 DEBUG [Listener at localhost/33941] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 18:15:52,533 INFO [Listener at localhost/33941] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:52,534 INFO [Listener at localhost/33941] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:52,535 INFO [Listener at localhost/33941] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:39753 connecting to ZooKeeper ensemble=127.0.0.1:58951 2023-07-16 18:15:52,539 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:397530x0, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 18:15:52,540 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:39753-0x1016f5907f20001 connected 2023-07-16 18:15:52,540 DEBUG [Listener at localhost/33941] zookeeper.ZKUtil(164): regionserver:39753-0x1016f5907f20001, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 18:15:52,541 DEBUG [Listener at localhost/33941] zookeeper.ZKUtil(164): regionserver:39753-0x1016f5907f20001, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 18:15:52,541 DEBUG [Listener at localhost/33941] zookeeper.ZKUtil(164): regionserver:39753-0x1016f5907f20001, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 18:15:52,542 DEBUG [Listener at localhost/33941] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=39753 2023-07-16 18:15:52,543 DEBUG [Listener at localhost/33941] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=39753 2023-07-16 18:15:52,544 DEBUG [Listener at localhost/33941] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=39753 2023-07-16 18:15:52,549 DEBUG [Listener at localhost/33941] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=39753 2023-07-16 18:15:52,549 DEBUG [Listener at localhost/33941] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=39753 2023-07-16 18:15:52,552 INFO [Listener at localhost/33941] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 18:15:52,552 INFO [Listener at localhost/33941] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 18:15:52,552 INFO [Listener at localhost/33941] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 18:15:52,553 INFO [Listener at localhost/33941] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 18:15:52,553 INFO [Listener at localhost/33941] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 18:15:52,553 INFO [Listener at localhost/33941] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 18:15:52,553 INFO [Listener at localhost/33941] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 18:15:52,555 INFO [Listener at localhost/33941] http.HttpServer(1146): Jetty bound to port 44543 2023-07-16 18:15:52,555 INFO [Listener at localhost/33941] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 18:15:52,559 INFO [Listener at localhost/33941] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:52,559 INFO [Listener at localhost/33941] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1cc8b87a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/hadoop.log.dir/,AVAILABLE} 2023-07-16 18:15:52,560 INFO [Listener at localhost/33941] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:52,560 INFO [Listener at localhost/33941] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@14dd322{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 18:15:52,684 INFO [Listener at localhost/33941] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 18:15:52,685 INFO [Listener at localhost/33941] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 18:15:52,685 INFO [Listener at localhost/33941] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 18:15:52,685 INFO [Listener at localhost/33941] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 18:15:52,687 INFO [Listener at localhost/33941] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:52,688 INFO 
[Listener at localhost/33941] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@19a55b52{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/java.io.tmpdir/jetty-0_0_0_0-44543-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5046938575529329859/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 18:15:52,689 INFO [Listener at localhost/33941] server.AbstractConnector(333): Started ServerConnector@62fe5a63{HTTP/1.1, (http/1.1)}{0.0.0.0:44543} 2023-07-16 18:15:52,690 INFO [Listener at localhost/33941] server.Server(415): Started @37469ms 2023-07-16 18:15:52,707 INFO [Listener at localhost/33941] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 18:15:52,707 INFO [Listener at localhost/33941] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:52,708 INFO [Listener at localhost/33941] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:52,708 INFO [Listener at localhost/33941] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 18:15:52,708 INFO [Listener at localhost/33941] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:52,708 INFO [Listener at localhost/33941] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 18:15:52,708 INFO [Listener at localhost/33941] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 18:15:52,709 INFO [Listener at localhost/33941] ipc.NettyRpcServer(120): Bind to /172.31.14.131:36093 2023-07-16 18:15:52,709 INFO [Listener at localhost/33941] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 18:15:52,711 DEBUG [Listener at localhost/33941] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 18:15:52,711 INFO [Listener at localhost/33941] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:52,713 INFO [Listener at localhost/33941] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:52,714 INFO [Listener at localhost/33941] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:36093 connecting to ZooKeeper ensemble=127.0.0.1:58951 2023-07-16 18:15:52,719 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:360930x0, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 
18:15:52,721 DEBUG [Listener at localhost/33941] zookeeper.ZKUtil(164): regionserver:360930x0, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 18:15:52,721 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:36093-0x1016f5907f20002 connected 2023-07-16 18:15:52,721 DEBUG [Listener at localhost/33941] zookeeper.ZKUtil(164): regionserver:36093-0x1016f5907f20002, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 18:15:52,722 DEBUG [Listener at localhost/33941] zookeeper.ZKUtil(164): regionserver:36093-0x1016f5907f20002, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 18:15:52,722 DEBUG [Listener at localhost/33941] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=36093 2023-07-16 18:15:52,723 DEBUG [Listener at localhost/33941] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=36093 2023-07-16 18:15:52,726 DEBUG [Listener at localhost/33941] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=36093 2023-07-16 18:15:52,727 DEBUG [Listener at localhost/33941] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=36093 2023-07-16 18:15:52,728 DEBUG [Listener at localhost/33941] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=36093 2023-07-16 18:15:52,730 INFO [Listener at localhost/33941] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 18:15:52,730 INFO [Listener at localhost/33941] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 18:15:52,730 INFO [Listener at localhost/33941] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 18:15:52,730 INFO [Listener at localhost/33941] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 18:15:52,730 INFO [Listener at localhost/33941] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 18:15:52,730 INFO [Listener at localhost/33941] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 18:15:52,731 INFO [Listener at localhost/33941] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
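The startup entries above show the shape of the cluster this test runs against: one HMaster with its RPC server bound to port 44131 and region servers at ports 39753 and 36093 (a third follows below), each wiring up its call queues, block cache, and ZooKeeper session against 127.0.0.1:58951. A minimal sketch of how a test typically brings up a comparable topology with the public HBaseTestingUtility API follows; it is an illustration under those assumptions, not the code of this particular test.

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterStartupSketch {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        // One master, three region servers and three datanodes, matching the topology
        // the surrounding log reports while it boots the mini cluster.
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)
            .numRegionServers(3)
            .numDataNodes(3)
            .build();
        util.startMiniCluster(option);
        try {
          // ... exercise the cluster via util.getConnection() / util.getAdmin() ...
        } finally {
          util.shutdownMiniCluster();
        }
      }
    }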
2023-07-16 18:15:52,731 INFO [Listener at localhost/33941] http.HttpServer(1146): Jetty bound to port 41031 2023-07-16 18:15:52,731 INFO [Listener at localhost/33941] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 18:15:52,738 INFO [Listener at localhost/33941] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:52,738 INFO [Listener at localhost/33941] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1ac8173c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/hadoop.log.dir/,AVAILABLE} 2023-07-16 18:15:52,738 INFO [Listener at localhost/33941] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:52,739 INFO [Listener at localhost/33941] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7b6d1808{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 18:15:52,853 INFO [Listener at localhost/33941] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 18:15:52,854 INFO [Listener at localhost/33941] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 18:15:52,854 INFO [Listener at localhost/33941] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 18:15:52,854 INFO [Listener at localhost/33941] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-16 18:15:52,855 INFO [Listener at localhost/33941] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:52,856 INFO [Listener at localhost/33941] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@44267f0{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/java.io.tmpdir/jetty-0_0_0_0-41031-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7012679358671448753/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 18:15:52,857 INFO [Listener at localhost/33941] server.AbstractConnector(333): Started ServerConnector@82d2458{HTTP/1.1, (http/1.1)}{0.0.0.0:41031} 2023-07-16 18:15:52,857 INFO [Listener at localhost/33941] server.Server(415): Started @37636ms 2023-07-16 18:15:52,868 INFO [Listener at localhost/33941] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 18:15:52,868 INFO [Listener at localhost/33941] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:52,868 INFO [Listener at localhost/33941] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:52,869 INFO [Listener at localhost/33941] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 18:15:52,869 INFO 
[Listener at localhost/33941] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:52,869 INFO [Listener at localhost/33941] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 18:15:52,869 INFO [Listener at localhost/33941] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 18:15:52,869 INFO [Listener at localhost/33941] ipc.NettyRpcServer(120): Bind to /172.31.14.131:34637 2023-07-16 18:15:52,870 INFO [Listener at localhost/33941] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 18:15:52,871 DEBUG [Listener at localhost/33941] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 18:15:52,872 INFO [Listener at localhost/33941] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:52,873 INFO [Listener at localhost/33941] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:52,873 INFO [Listener at localhost/33941] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:34637 connecting to ZooKeeper ensemble=127.0.0.1:58951 2023-07-16 18:15:52,877 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:346370x0, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 18:15:52,879 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:34637-0x1016f5907f20003 connected 2023-07-16 18:15:52,879 DEBUG [Listener at localhost/33941] zookeeper.ZKUtil(164): regionserver:34637-0x1016f5907f20003, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 18:15:52,879 DEBUG [Listener at localhost/33941] zookeeper.ZKUtil(164): regionserver:34637-0x1016f5907f20003, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 18:15:52,882 DEBUG [Listener at localhost/33941] zookeeper.ZKUtil(164): regionserver:34637-0x1016f5907f20003, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 18:15:52,883 DEBUG [Listener at localhost/33941] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=34637 2023-07-16 18:15:52,883 DEBUG [Listener at localhost/33941] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=34637 2023-07-16 18:15:52,887 DEBUG [Listener at localhost/33941] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=34637 2023-07-16 18:15:52,888 DEBUG [Listener at localhost/33941] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=34637 2023-07-16 18:15:52,888 DEBUG [Listener at localhost/33941] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=34637 2023-07-16 18:15:52,890 INFO [Listener at localhost/33941] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 18:15:52,890 INFO [Listener at localhost/33941] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 18:15:52,890 INFO [Listener at localhost/33941] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 18:15:52,891 INFO [Listener at localhost/33941] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 18:15:52,891 INFO [Listener at localhost/33941] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 18:15:52,891 INFO [Listener at localhost/33941] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 18:15:52,891 INFO [Listener at localhost/33941] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 18:15:52,892 INFO [Listener at localhost/33941] http.HttpServer(1146): Jetty bound to port 40165 2023-07-16 18:15:52,892 INFO [Listener at localhost/33941] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 18:15:52,895 INFO [Listener at localhost/33941] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:52,896 INFO [Listener at localhost/33941] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@1782cf7b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/hadoop.log.dir/,AVAILABLE} 2023-07-16 18:15:52,896 INFO [Listener at localhost/33941] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:52,896 INFO [Listener at localhost/33941] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6a2ebbcc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 18:15:53,014 INFO [Listener at localhost/33941] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 18:15:53,015 INFO [Listener at localhost/33941] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 18:15:53,016 INFO [Listener at localhost/33941] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 18:15:53,016 INFO [Listener at localhost/33941] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-16 18:15:53,017 INFO [Listener at localhost/33941] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:53,018 INFO [Listener at localhost/33941] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@5c5ad983{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/java.io.tmpdir/jetty-0_0_0_0-40165-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2428747684210766197/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 18:15:53,019 INFO [Listener at localhost/33941] server.AbstractConnector(333): Started ServerConnector@789ac99f{HTTP/1.1, (http/1.1)}{0.0.0.0:40165} 2023-07-16 18:15:53,020 INFO [Listener at localhost/33941] server.Server(415): Started @37799ms 2023-07-16 18:15:53,022 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 18:15:53,028 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@d6d2719{HTTP/1.1, (http/1.1)}{0.0.0.0:37985} 2023-07-16 18:15:53,028 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @37808ms 2023-07-16 18:15:53,028 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,44131,1689531352332 2023-07-16 18:15:53,030 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-16 18:15:53,030 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,44131,1689531352332 2023-07-16 18:15:53,032 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 18:15:53,032 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:36093-0x1016f5907f20002, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 18:15:53,032 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:34637-0x1016f5907f20003, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 18:15:53,032 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:39753-0x1016f5907f20001, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 18:15:53,032 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:53,034 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 18:15:53,036 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,44131,1689531352332 from backup master directory 2023-07-16 18:15:53,036 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 18:15:53,039 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,44131,1689531352332 2023-07-16 18:15:53,039 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-16 18:15:53,039 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 18:15:53,039 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,44131,1689531352332 2023-07-16 18:15:53,058 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/hbase.id with ID: e0307dfd-c354-4147-8604-f4786f09e3a5 2023-07-16 18:15:53,071 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:53,074 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:53,088 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x5baa060a to 127.0.0.1:58951 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 18:15:53,092 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1e9b6906, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 18:15:53,092 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 18:15:53,092 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-16 18:15:53,093 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 18:15:53,094 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/MasterData/data/master/store-tmp 2023-07-16 18:15:53,107 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:53,107 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-16 18:15:53,107 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 18:15:53,107 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 18:15:53,107 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-16 18:15:53,107 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 18:15:53,107 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
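The HRegion(7693) entry above prints the descriptor of the master-local 'master:store' table as a single 'proc' column family with one version, a ROW bloom filter, 64 KB blocks, block cache enabled, and no compression or encoding. The same attribute set can be expressed with the public 2.x descriptor builders; the sketch below uses a hypothetical table name, since 'master:store' itself is created internally during master startup rather than by client code.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class StoreDescriptorSketch {
      public static void main(String[] args) {
        // Mirrors the attributes logged for the 'proc' family: VERSIONS=1, BLOOMFILTER=ROW,
        // BLOCKSIZE=65536, BLOCKCACHE=true, IN_MEMORY=false.
        ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("proc"))
            .setMaxVersions(1)
            .setBloomFilterType(BloomType.ROW)
            .setBlocksize(64 * 1024)
            .setBlockCacheEnabled(true)
            .setInMemory(false)
            .build();
        // Hypothetical table name for illustration; the real region shown in the log
        // belongs to the master-private 'master:store' table.
        TableDescriptor td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("example_store"))
            .setColumnFamily(proc)
            .build();
        System.out.println(td);
      }
    }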
2023-07-16 18:15:53,107 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 18:15:53,108 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/MasterData/WALs/jenkins-hbase4.apache.org,44131,1689531352332 2023-07-16 18:15:53,110 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C44131%2C1689531352332, suffix=, logDir=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/MasterData/WALs/jenkins-hbase4.apache.org,44131,1689531352332, archiveDir=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/MasterData/oldWALs, maxLogs=10 2023-07-16 18:15:53,124 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43791,DS-29c49014-e67d-4232-82d5-0b72dfba0857,DISK] 2023-07-16 18:15:53,125 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41335,DS-c499a704-5e35-4d4a-b751-ff215efc7044,DISK] 2023-07-16 18:15:53,126 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34759,DS-695eb77d-ee54-4846-805d-18783915c745,DISK] 2023-07-16 18:15:53,131 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/MasterData/WALs/jenkins-hbase4.apache.org,44131,1689531352332/jenkins-hbase4.apache.org%2C44131%2C1689531352332.1689531353110 2023-07-16 18:15:53,131 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43791,DS-29c49014-e67d-4232-82d5-0b72dfba0857,DISK], DatanodeInfoWithStorage[127.0.0.1:41335,DS-c499a704-5e35-4d4a-b751-ff215efc7044,DISK], DatanodeInfoWithStorage[127.0.0.1:34759,DS-695eb77d-ee54-4846-805d-18783915c745,DISK]] 2023-07-16 18:15:53,131 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:53,131 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:53,131 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 18:15:53,131 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 18:15:53,133 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-16 18:15:53,135 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-16 18:15:53,135 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-16 18:15:53,135 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:53,136 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-16 18:15:53,136 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-16 18:15:53,139 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 18:15:53,141 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:53,141 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10162166560, jitterRate=-0.05357448756694794}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:53,141 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 18:15:53,142 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-16 18:15:53,143 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-16 18:15:53,143 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-16 18:15:53,143 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-16 18:15:53,143 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-16 18:15:53,144 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-16 18:15:53,144 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-16 18:15:53,144 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-16 18:15:53,145 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-16 18:15:53,146 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-16 18:15:53,146 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-16 18:15:53,147 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-16 18:15:53,149 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:53,149 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-16 18:15:53,149 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-16 18:15:53,151 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-16 18:15:53,153 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:36093-0x1016f5907f20002, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 18:15:53,153 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:34637-0x1016f5907f20003, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 18:15:53,153 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-16 18:15:53,154 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:39753-0x1016f5907f20001, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 18:15:53,154 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:53,156 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,44131,1689531352332, sessionid=0x1016f5907f20000, setting cluster-up flag (Was=false) 2023-07-16 18:15:53,161 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:53,167 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-16 18:15:53,168 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44131,1689531352332 2023-07-16 18:15:53,171 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:53,178 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-16 18:15:53,179 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,44131,1689531352332 2023-07-16 18:15:53,180 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/.hbase-snapshot/.tmp 2023-07-16 18:15:53,182 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-16 18:15:53,182 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-16 18:15:53,184 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-16 18:15:53,185 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44131,1689531352332] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 18:15:53,185 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
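The entries above show the master registering the RSGroupAdminService coprocessor endpoint (RSGroupAdminEndpoint) while the RSGroupInfoManager refreshes in offline mode. A minimal sketch of driving that endpoint from client code, assuming the hbase-rsgroup module's RSGroupAdminClient; the group name 'my_group' is hypothetical, and the host/port only mirror one of the region servers registering below.

    import java.util.Collections;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
    import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

    public final class RSGroupAdminSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
          // RSGroupAdminClient talks to the RSGroupAdminService endpoint registered above.
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Create a group and move one server into it ('my_group' is a placeholder).
          rsGroupAdmin.addRSGroup("my_group");
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 39753)),
              "my_group");
          RSGroupInfo info = rsGroupAdmin.getRSGroupInfo("my_group");
          System.out.println("servers in my_group: " + info.getServers());
        }
      }
    }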
2023-07-16 18:15:53,185 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver loaded, priority=536870913. 2023-07-16 18:15:53,187 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-16 18:15:53,197 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-16 18:15:53,197 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-16 18:15:53,198 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-16 18:15:53,198 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 
2023-07-16 18:15:53,198 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 18:15:53,198 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 18:15:53,198 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 18:15:53,198 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 18:15:53,198 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-16 18:15:53,198 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,198 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 18:15:53,198 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,199 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689531383199 2023-07-16 18:15:53,200 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-16 18:15:53,200 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-16 18:15:53,200 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-16 18:15:53,200 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-16 18:15:53,200 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-16 18:15:53,200 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-16 18:15:53,200 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 
2023-07-16 18:15:53,201 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-16 18:15:53,201 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-16 18:15:53,201 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-16 18:15:53,202 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-16 18:15:53,202 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-16 18:15:53,202 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-16 18:15:53,202 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-16 18:15:53,202 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689531353202,5,FailOnTimeoutGroup] 2023-07-16 18:15:53,202 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689531353202,5,FailOnTimeoutGroup] 2023-07-16 18:15:53,203 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,203 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-16 18:15:53,203 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,203 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 
2023-07-16 18:15:53,203 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-16 18:15:53,216 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-16 18:15:53,217 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-16 18:15:53,217 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e 2023-07-16 18:15:53,224 INFO [RS:1;jenkins-hbase4:36093] regionserver.HRegionServer(951): ClusterId : e0307dfd-c354-4147-8604-f4786f09e3a5 2023-07-16 18:15:53,224 INFO [RS:2;jenkins-hbase4:34637] regionserver.HRegionServer(951): ClusterId : e0307dfd-c354-4147-8604-f4786f09e3a5 2023-07-16 18:15:53,225 INFO [RS:0;jenkins-hbase4:39753] regionserver.HRegionServer(951): ClusterId : e0307dfd-c354-4147-8604-f4786f09e3a5 2023-07-16 18:15:53,225 DEBUG [RS:1;jenkins-hbase4:36093] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 18:15:53,227 DEBUG [RS:2;jenkins-hbase4:34637] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 18:15:53,227 DEBUG 
[RS:0;jenkins-hbase4:39753] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 18:15:53,230 DEBUG [RS:1;jenkins-hbase4:36093] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 18:15:53,230 DEBUG [RS:1;jenkins-hbase4:36093] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 18:15:53,230 DEBUG [RS:2;jenkins-hbase4:34637] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 18:15:53,230 DEBUG [RS:2;jenkins-hbase4:34637] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 18:15:53,230 DEBUG [RS:0;jenkins-hbase4:39753] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 18:15:53,230 DEBUG [RS:0;jenkins-hbase4:39753] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 18:15:53,232 DEBUG [RS:1;jenkins-hbase4:36093] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 18:15:53,232 DEBUG [RS:0;jenkins-hbase4:39753] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 18:15:53,232 DEBUG [RS:2;jenkins-hbase4:34637] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 18:15:53,239 DEBUG [RS:1;jenkins-hbase4:36093] zookeeper.ReadOnlyZKClient(139): Connect 0x6decd915 to 127.0.0.1:58951 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 18:15:53,239 DEBUG [RS:2;jenkins-hbase4:34637] zookeeper.ReadOnlyZKClient(139): Connect 0x11dcc035 to 127.0.0.1:58951 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 18:15:53,239 DEBUG [RS:0;jenkins-hbase4:39753] zookeeper.ReadOnlyZKClient(139): Connect 0x74cca31d to 127.0.0.1:58951 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 18:15:53,247 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:53,252 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 18:15:53,255 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740/info 2023-07-16 18:15:53,256 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 
1588230740 columnFamilyName info 2023-07-16 18:15:53,256 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:53,257 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 18:15:53,258 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740/rep_barrier 2023-07-16 18:15:53,258 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 18:15:53,259 DEBUG [RS:1;jenkins-hbase4:36093] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@632b3ad8, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 18:15:53,259 DEBUG [RS:0;jenkins-hbase4:39753] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7a0b43f4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 18:15:53,259 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:53,260 DEBUG [RS:0;jenkins-hbase4:39753] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@3b3649cf, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 18:15:53,260 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 18:15:53,260 DEBUG [RS:1;jenkins-hbase4:36093] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@221630e3, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind 
address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 18:15:53,261 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740/table 2023-07-16 18:15:53,261 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 18:15:53,262 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:53,263 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740 2023-07-16 18:15:53,263 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740 2023-07-16 18:15:53,266 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-16 18:15:53,267 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 18:15:53,270 DEBUG [RS:0;jenkins-hbase4:39753] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:39753 2023-07-16 18:15:53,270 INFO [RS:0;jenkins-hbase4:39753] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 18:15:53,270 INFO [RS:0;jenkins-hbase4:39753] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 18:15:53,270 DEBUG [RS:0;jenkins-hbase4:39753] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-16 18:15:53,270 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:53,270 INFO [RS:0;jenkins-hbase4:39753] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44131,1689531352332 with isa=jenkins-hbase4.apache.org/172.31.14.131:39753, startcode=1689531352528 2023-07-16 18:15:53,271 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10007612320, jitterRate=-0.0679684728384018}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 18:15:53,271 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 18:15:53,271 DEBUG [RS:0;jenkins-hbase4:39753] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 18:15:53,271 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 18:15:53,271 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 18:15:53,271 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 18:15:53,271 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 18:15:53,271 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 18:15:53,271 DEBUG [RS:2;jenkins-hbase4:34637] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1f1b2c7e, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 18:15:53,271 DEBUG [RS:2;jenkins-hbase4:34637] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1b24ab45, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 18:15:53,272 DEBUG [RS:1;jenkins-hbase4:36093] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:36093 2023-07-16 18:15:53,272 INFO [RS:1;jenkins-hbase4:36093] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 18:15:53,272 INFO [RS:1;jenkins-hbase4:36093] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 18:15:53,272 DEBUG [RS:1;jenkins-hbase4:36093] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-16 18:15:53,273 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 18:15:53,273 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-16 18:15:53,274 INFO [RS:1;jenkins-hbase4:36093] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44131,1689531352332 with isa=jenkins-hbase4.apache.org/172.31.14.131:36093, startcode=1689531352707 2023-07-16 18:15:53,274 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58973, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.4 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 18:15:53,274 DEBUG [RS:1;jenkins-hbase4:36093] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 18:15:53,276 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44131] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,39753,1689531352528 2023-07-16 18:15:53,276 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44131,1689531352332] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 18:15:53,276 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44131,1689531352332] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-16 18:15:53,277 DEBUG [RS:0;jenkins-hbase4:39753] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e 2023-07-16 18:15:53,277 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-16 18:15:53,277 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-16 18:15:53,277 DEBUG [RS:0;jenkins-hbase4:39753] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40765 2023-07-16 18:15:53,277 DEBUG [RS:0;jenkins-hbase4:39753] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36989 2023-07-16 18:15:53,277 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-16 18:15:53,278 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-16 18:15:53,279 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:15:53,280 DEBUG [RS:0;jenkins-hbase4:39753] zookeeper.ZKUtil(162): regionserver:39753-0x1016f5907f20001, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39753,1689531352528 2023-07-16 18:15:53,280 INFO [RS-EventLoopGroup-8-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56675, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.5 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 18:15:53,280 WARN 
[RS:0;jenkins-hbase4:39753] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 18:15:53,280 INFO [RS:0;jenkins-hbase4:39753] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 18:15:53,280 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44131] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,36093,1689531352707 2023-07-16 18:15:53,280 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,39753,1689531352528] 2023-07-16 18:15:53,280 DEBUG [RS:0;jenkins-hbase4:39753] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/WALs/jenkins-hbase4.apache.org,39753,1689531352528 2023-07-16 18:15:53,280 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-16 18:15:53,280 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44131,1689531352332] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 18:15:53,280 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44131,1689531352332] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-16 18:15:53,280 DEBUG [RS:1;jenkins-hbase4:36093] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e 2023-07-16 18:15:53,281 DEBUG [RS:1;jenkins-hbase4:36093] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40765 2023-07-16 18:15:53,281 DEBUG [RS:1;jenkins-hbase4:36093] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36989 2023-07-16 18:15:53,285 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:15:53,285 DEBUG [RS:1;jenkins-hbase4:36093] zookeeper.ZKUtil(162): regionserver:36093-0x1016f5907f20002, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36093,1689531352707 2023-07-16 18:15:53,285 WARN [RS:1;jenkins-hbase4:36093] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-16 18:15:53,285 INFO [RS:1;jenkins-hbase4:36093] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 18:15:53,285 DEBUG [RS:1;jenkins-hbase4:36093] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/WALs/jenkins-hbase4.apache.org,36093,1689531352707 2023-07-16 18:15:53,286 DEBUG [RS:0;jenkins-hbase4:39753] zookeeper.ZKUtil(162): regionserver:39753-0x1016f5907f20001, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36093,1689531352707 2023-07-16 18:15:53,286 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,36093,1689531352707] 2023-07-16 18:15:53,286 DEBUG [RS:0;jenkins-hbase4:39753] zookeeper.ZKUtil(162): regionserver:39753-0x1016f5907f20001, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39753,1689531352528 2023-07-16 18:15:53,288 DEBUG [RS:0;jenkins-hbase4:39753] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 18:15:53,289 INFO [RS:0;jenkins-hbase4:39753] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 18:15:53,289 DEBUG [RS:2;jenkins-hbase4:34637] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:34637 2023-07-16 18:15:53,289 INFO [RS:2;jenkins-hbase4:34637] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 18:15:53,289 INFO [RS:2;jenkins-hbase4:34637] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 18:15:53,289 DEBUG [RS:2;jenkins-hbase4:34637] regionserver.HRegionServer(1022): About to register with Master. 
2023-07-16 18:15:53,290 INFO [RS:2;jenkins-hbase4:34637] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,44131,1689531352332 with isa=jenkins-hbase4.apache.org/172.31.14.131:34637, startcode=1689531352868 2023-07-16 18:15:53,290 DEBUG [RS:1;jenkins-hbase4:36093] zookeeper.ZKUtil(162): regionserver:36093-0x1016f5907f20002, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36093,1689531352707 2023-07-16 18:15:53,290 DEBUG [RS:2;jenkins-hbase4:34637] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 18:15:53,290 DEBUG [RS:1;jenkins-hbase4:36093] zookeeper.ZKUtil(162): regionserver:36093-0x1016f5907f20002, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39753,1689531352528 2023-07-16 18:15:53,291 INFO [RS:0;jenkins-hbase4:39753] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 18:15:53,291 INFO [RS-EventLoopGroup-8-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59283, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.6 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 18:15:53,291 INFO [RS:0;jenkins-hbase4:39753] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 18:15:53,292 INFO [RS:0;jenkins-hbase4:39753] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,292 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=44131] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,34637,1689531352868 2023-07-16 18:15:53,292 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44131,1689531352332] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 18:15:53,292 INFO [RS:0;jenkins-hbase4:39753] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 18:15:53,292 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44131,1689531352332] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-16 18:15:53,293 DEBUG [RS:2;jenkins-hbase4:34637] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e 2023-07-16 18:15:53,293 DEBUG [RS:2;jenkins-hbase4:34637] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:40765 2023-07-16 18:15:53,293 DEBUG [RS:2;jenkins-hbase4:34637] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=36989 2023-07-16 18:15:53,294 INFO [RS:0;jenkins-hbase4:39753] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-16 18:15:53,294 DEBUG [RS:0;jenkins-hbase4:39753] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,294 DEBUG [RS:0;jenkins-hbase4:39753] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,294 DEBUG [RS:0;jenkins-hbase4:39753] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,294 DEBUG [RS:0;jenkins-hbase4:39753] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,294 DEBUG [RS:0;jenkins-hbase4:39753] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,294 DEBUG [RS:0;jenkins-hbase4:39753] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 18:15:53,294 DEBUG [RS:0;jenkins-hbase4:39753] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,294 DEBUG [RS:0;jenkins-hbase4:39753] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,294 DEBUG [RS:0;jenkins-hbase4:39753] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,294 DEBUG [RS:0;jenkins-hbase4:39753] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,295 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:39753-0x1016f5907f20001, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:15:53,295 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:15:53,295 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:36093-0x1016f5907f20002, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:15:53,295 DEBUG [RS:1;jenkins-hbase4:36093] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 18:15:53,296 INFO [RS:1;jenkins-hbase4:36093] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 18:15:53,297 DEBUG [RS:2;jenkins-hbase4:34637] zookeeper.ZKUtil(162): regionserver:34637-0x1016f5907f20003, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34637,1689531352868 2023-07-16 18:15:53,297 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,34637,1689531352868] 2023-07-16 18:15:53,297 WARN 
[RS:2;jenkins-hbase4:34637] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 18:15:53,297 INFO [RS:2;jenkins-hbase4:34637] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 18:15:53,297 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36093-0x1016f5907f20002, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36093,1689531352707 2023-07-16 18:15:53,297 DEBUG [RS:2;jenkins-hbase4:34637] regionserver.HRegionServer(1948): logDir=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/WALs/jenkins-hbase4.apache.org,34637,1689531352868 2023-07-16 18:15:53,297 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39753-0x1016f5907f20001, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36093,1689531352707 2023-07-16 18:15:53,297 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36093-0x1016f5907f20002, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39753,1689531352528 2023-07-16 18:15:53,297 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39753-0x1016f5907f20001, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39753,1689531352528 2023-07-16 18:15:53,298 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:36093-0x1016f5907f20002, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34637,1689531352868 2023-07-16 18:15:53,298 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:39753-0x1016f5907f20001, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34637,1689531352868 2023-07-16 18:15:53,298 INFO [RS:0;jenkins-hbase4:39753] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,298 INFO [RS:0;jenkins-hbase4:39753] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,299 INFO [RS:0;jenkins-hbase4:39753] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,299 INFO [RS:1;jenkins-hbase4:36093] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 18:15:53,299 INFO [RS:0;jenkins-hbase4:39753] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,303 INFO [RS:1;jenkins-hbase4:36093] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 18:15:53,303 INFO [RS:1;jenkins-hbase4:36093] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 
2023-07-16 18:15:53,304 INFO [RS:1;jenkins-hbase4:36093] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 18:15:53,306 INFO [RS:1;jenkins-hbase4:36093] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,306 DEBUG [RS:1;jenkins-hbase4:36093] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,306 DEBUG [RS:1;jenkins-hbase4:36093] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,306 DEBUG [RS:1;jenkins-hbase4:36093] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,306 DEBUG [RS:1;jenkins-hbase4:36093] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,307 DEBUG [RS:1;jenkins-hbase4:36093] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,307 DEBUG [RS:1;jenkins-hbase4:36093] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 18:15:53,307 DEBUG [RS:1;jenkins-hbase4:36093] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,307 DEBUG [RS:1;jenkins-hbase4:36093] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,307 DEBUG [RS:1;jenkins-hbase4:36093] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,307 DEBUG [RS:1;jenkins-hbase4:36093] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,307 DEBUG [RS:2;jenkins-hbase4:34637] zookeeper.ZKUtil(162): regionserver:34637-0x1016f5907f20003, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,36093,1689531352707 2023-07-16 18:15:53,308 DEBUG [RS:2;jenkins-hbase4:34637] zookeeper.ZKUtil(162): regionserver:34637-0x1016f5907f20003, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,39753,1689531352528 2023-07-16 18:15:53,308 DEBUG [RS:2;jenkins-hbase4:34637] zookeeper.ZKUtil(162): regionserver:34637-0x1016f5907f20003, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,34637,1689531352868 2023-07-16 18:15:53,309 DEBUG [RS:2;jenkins-hbase4:34637] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 18:15:53,309 INFO [RS:2;jenkins-hbase4:34637] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 18:15:53,312 INFO [RS:0;jenkins-hbase4:39753] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 18:15:53,312 INFO [RS:0;jenkins-hbase4:39753] hbase.ChoreService(166): Chore 
ScheduledChore name=jenkins-hbase4.apache.org,39753,1689531352528-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,313 INFO [RS:2;jenkins-hbase4:34637] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 18:15:53,313 INFO [RS:1;jenkins-hbase4:36093] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,313 INFO [RS:1;jenkins-hbase4:36093] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,314 INFO [RS:1;jenkins-hbase4:36093] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,314 INFO [RS:1;jenkins-hbase4:36093] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,314 INFO [RS:2;jenkins-hbase4:34637] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 18:15:53,314 INFO [RS:2;jenkins-hbase4:34637] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,314 INFO [RS:2;jenkins-hbase4:34637] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 18:15:53,316 INFO [RS:2;jenkins-hbase4:34637] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,316 DEBUG [RS:2;jenkins-hbase4:34637] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,316 DEBUG [RS:2;jenkins-hbase4:34637] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,316 DEBUG [RS:2;jenkins-hbase4:34637] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,316 DEBUG [RS:2;jenkins-hbase4:34637] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,316 DEBUG [RS:2;jenkins-hbase4:34637] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,316 DEBUG [RS:2;jenkins-hbase4:34637] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 18:15:53,316 DEBUG [RS:2;jenkins-hbase4:34637] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,316 DEBUG [RS:2;jenkins-hbase4:34637] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,316 DEBUG [RS:2;jenkins-hbase4:34637] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 
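The region servers above are registering periodic tasks through ChoreService ("Chore ScheduledChore name=..., period=..., unit=MILLISECONDS is enabled."): CompactionChecker and MemstoreFlusherChore every 1000 ms, nonceCleaner, FileSystemUtilizationChore, HeapMemoryTunerChore, and so on. A minimal sketch of how such a chore is defined and scheduled, assuming the HBase 2.x ScheduledChore/ChoreService signatures as recalled from the source; the chore body and names here are hypothetical, not the real CompactionChecker.

```java
import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ChoreSketch {
  public static void main(String[] args) throws InterruptedException {
    // Minimal Stoppable so the chore has a lifecycle owner (in the log this is the HRegionServer).
    Stoppable stopper = new Stoppable() {
      private volatile boolean stopped;
      @Override public void stop(String why) { stopped = true; }
      @Override public boolean isStopped() { return stopped; }
    };

    // Period is in milliseconds, matching "period=1000, unit=MILLISECONDS" in the log above.
    ScheduledChore demoChecker = new ScheduledChore("DemoCompactionChecker", stopper, 1000) {
      @Override protected void chore() {
        // Hypothetical body; the real CompactionChecker asks each store whether it needs compaction.
        System.out.println("chore tick");
      }
    };

    ChoreService choreService = new ChoreService("demo");
    choreService.scheduleChore(demoChecker); // emits the "Chore ScheduledChore ... is enabled." line
    Thread.sleep(3000);
    choreService.shutdown();
  }
}
```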
2023-07-16 18:15:53,317 DEBUG [RS:2;jenkins-hbase4:34637] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:53,319 INFO [RS:2;jenkins-hbase4:34637] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,319 INFO [RS:2;jenkins-hbase4:34637] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,319 INFO [RS:2;jenkins-hbase4:34637] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,319 INFO [RS:2;jenkins-hbase4:34637] hbase.ChoreService(166): Chore ScheduledChore name=FileSystemUtilizationChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,326 INFO [RS:1;jenkins-hbase4:36093] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 18:15:53,326 INFO [RS:1;jenkins-hbase4:36093] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,36093,1689531352707-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,326 INFO [RS:0;jenkins-hbase4:39753] regionserver.Replication(203): jenkins-hbase4.apache.org,39753,1689531352528 started 2023-07-16 18:15:53,326 INFO [RS:0;jenkins-hbase4:39753] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,39753,1689531352528, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:39753, sessionid=0x1016f5907f20001 2023-07-16 18:15:53,326 DEBUG [RS:0;jenkins-hbase4:39753] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 18:15:53,326 DEBUG [RS:0;jenkins-hbase4:39753] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,39753,1689531352528 2023-07-16 18:15:53,327 DEBUG [RS:0;jenkins-hbase4:39753] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39753,1689531352528' 2023-07-16 18:15:53,327 DEBUG [RS:0;jenkins-hbase4:39753] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 18:15:53,327 DEBUG [RS:0;jenkins-hbase4:39753] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 18:15:53,327 DEBUG [RS:0;jenkins-hbase4:39753] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 18:15:53,327 DEBUG [RS:0;jenkins-hbase4:39753] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 18:15:53,327 DEBUG [RS:0;jenkins-hbase4:39753] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,39753,1689531352528 2023-07-16 18:15:53,327 DEBUG [RS:0;jenkins-hbase4:39753] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,39753,1689531352528' 2023-07-16 18:15:53,327 DEBUG [RS:0;jenkins-hbase4:39753] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 18:15:53,328 DEBUG [RS:0;jenkins-hbase4:39753] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 18:15:53,328 DEBUG [RS:0;jenkins-hbase4:39753] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot 
started 2023-07-16 18:15:53,328 INFO [RS:0;jenkins-hbase4:39753] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-16 18:15:53,331 INFO [RS:0;jenkins-hbase4:39753] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,331 DEBUG [RS:0;jenkins-hbase4:39753] zookeeper.ZKUtil(398): regionserver:39753-0x1016f5907f20001, quorum=127.0.0.1:58951, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-16 18:15:53,332 INFO [RS:0;jenkins-hbase4:39753] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-16 18:15:53,332 INFO [RS:0;jenkins-hbase4:39753] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,332 INFO [RS:0;jenkins-hbase4:39753] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,337 INFO [RS:2;jenkins-hbase4:34637] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 18:15:53,337 INFO [RS:2;jenkins-hbase4:34637] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,34637,1689531352868-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,339 INFO [RS:1;jenkins-hbase4:36093] regionserver.Replication(203): jenkins-hbase4.apache.org,36093,1689531352707 started 2023-07-16 18:15:53,339 INFO [RS:1;jenkins-hbase4:36093] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,36093,1689531352707, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:36093, sessionid=0x1016f5907f20002 2023-07-16 18:15:53,339 DEBUG [RS:1;jenkins-hbase4:36093] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 18:15:53,339 DEBUG [RS:1;jenkins-hbase4:36093] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,36093,1689531352707 2023-07-16 18:15:53,339 DEBUG [RS:1;jenkins-hbase4:36093] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36093,1689531352707' 2023-07-16 18:15:53,339 DEBUG [RS:1;jenkins-hbase4:36093] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 18:15:53,339 DEBUG [RS:1;jenkins-hbase4:36093] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 18:15:53,340 DEBUG [RS:1;jenkins-hbase4:36093] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 18:15:53,340 DEBUG [RS:1;jenkins-hbase4:36093] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 18:15:53,340 DEBUG [RS:1;jenkins-hbase4:36093] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,36093,1689531352707 2023-07-16 18:15:53,340 DEBUG [RS:1;jenkins-hbase4:36093] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,36093,1689531352707' 2023-07-16 18:15:53,340 DEBUG [RS:1;jenkins-hbase4:36093] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 18:15:53,340 DEBUG [RS:1;jenkins-hbase4:36093] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures 
under znode:'/hbase/online-snapshot/acquired' 2023-07-16 18:15:53,340 DEBUG [RS:1;jenkins-hbase4:36093] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 18:15:53,340 INFO [RS:1;jenkins-hbase4:36093] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-16 18:15:53,340 INFO [RS:1;jenkins-hbase4:36093] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,341 DEBUG [RS:1;jenkins-hbase4:36093] zookeeper.ZKUtil(398): regionserver:36093-0x1016f5907f20002, quorum=127.0.0.1:58951, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-16 18:15:53,341 INFO [RS:1;jenkins-hbase4:36093] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-16 18:15:53,341 INFO [RS:1;jenkins-hbase4:36093] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,341 INFO [RS:1;jenkins-hbase4:36093] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,353 INFO [RS:2;jenkins-hbase4:34637] regionserver.Replication(203): jenkins-hbase4.apache.org,34637,1689531352868 started 2023-07-16 18:15:53,354 INFO [RS:2;jenkins-hbase4:34637] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,34637,1689531352868, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:34637, sessionid=0x1016f5907f20003 2023-07-16 18:15:53,354 DEBUG [RS:2;jenkins-hbase4:34637] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 18:15:53,354 DEBUG [RS:2;jenkins-hbase4:34637] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,34637,1689531352868 2023-07-16 18:15:53,354 DEBUG [RS:2;jenkins-hbase4:34637] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34637,1689531352868' 2023-07-16 18:15:53,354 DEBUG [RS:2;jenkins-hbase4:34637] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 18:15:53,354 DEBUG [RS:2;jenkins-hbase4:34637] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 18:15:53,355 DEBUG [RS:2;jenkins-hbase4:34637] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 18:15:53,355 DEBUG [RS:2;jenkins-hbase4:34637] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 18:15:53,355 DEBUG [RS:2;jenkins-hbase4:34637] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,34637,1689531352868 2023-07-16 18:15:53,355 DEBUG [RS:2;jenkins-hbase4:34637] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,34637,1689531352868' 2023-07-16 18:15:53,355 DEBUG [RS:2;jenkins-hbase4:34637] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 18:15:53,355 DEBUG [RS:2;jenkins-hbase4:34637] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 18:15:53,355 DEBUG [RS:2;jenkins-hbase4:34637] procedure.RegionServerProcedureManagerHost(53): Procedure 
online-snapshot started 2023-07-16 18:15:53,355 INFO [RS:2;jenkins-hbase4:34637] quotas.RegionServerRpcQuotaManager(67): Initializing RPC quota support 2023-07-16 18:15:53,355 INFO [RS:2;jenkins-hbase4:34637] hbase.ChoreService(166): Chore ScheduledChore name=QuotaRefresherChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,356 DEBUG [RS:2;jenkins-hbase4:34637] zookeeper.ZKUtil(398): regionserver:34637-0x1016f5907f20003, quorum=127.0.0.1:58951, baseZNode=/hbase Unable to get data of znode /hbase/rpc-throttle because node does not exist (not an error) 2023-07-16 18:15:53,356 INFO [RS:2;jenkins-hbase4:34637] quotas.RegionServerRpcQuotaManager(73): Start rpc quota manager and rpc throttle enabled is true 2023-07-16 18:15:53,356 INFO [RS:2;jenkins-hbase4:34637] hbase.ChoreService(166): Chore ScheduledChore name=SpaceQuotaRefresherChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,356 INFO [RS:2;jenkins-hbase4:34637] hbase.ChoreService(166): Chore ScheduledChore name=RegionSizeReportingChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,430 DEBUG [jenkins-hbase4:44131] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-16 18:15:53,431 DEBUG [jenkins-hbase4:44131] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 18:15:53,431 DEBUG [jenkins-hbase4:44131] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 18:15:53,431 DEBUG [jenkins-hbase4:44131] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 18:15:53,431 DEBUG [jenkins-hbase4:44131] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 18:15:53,431 DEBUG [jenkins-hbase4:44131] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 18:15:53,432 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34637,1689531352868, state=OPENING 2023-07-16 18:15:53,434 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-16 18:15:53,435 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:53,436 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 18:15:53,439 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34637,1689531352868}] 2023-07-16 18:15:53,439 INFO [RS:0;jenkins-hbase4:39753] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C39753%2C1689531352528, suffix=, logDir=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/WALs/jenkins-hbase4.apache.org,39753,1689531352528, archiveDir=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/oldWALs, maxLogs=32 2023-07-16 18:15:53,443 INFO [RS:1;jenkins-hbase4:36093] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C36093%2C1689531352707, suffix=, 
logDir=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/WALs/jenkins-hbase4.apache.org,36093,1689531352707, archiveDir=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/oldWALs, maxLogs=32 2023-07-16 18:15:53,459 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43791,DS-29c49014-e67d-4232-82d5-0b72dfba0857,DISK] 2023-07-16 18:15:53,459 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41335,DS-c499a704-5e35-4d4a-b751-ff215efc7044,DISK] 2023-07-16 18:15:53,459 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34759,DS-695eb77d-ee54-4846-805d-18783915c745,DISK] 2023-07-16 18:15:53,460 INFO [RS:2;jenkins-hbase4:34637] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34637%2C1689531352868, suffix=, logDir=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/WALs/jenkins-hbase4.apache.org,34637,1689531352868, archiveDir=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/oldWALs, maxLogs=32 2023-07-16 18:15:53,462 INFO [RS:0;jenkins-hbase4:39753] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/WALs/jenkins-hbase4.apache.org,39753,1689531352528/jenkins-hbase4.apache.org%2C39753%2C1689531352528.1689531353440 2023-07-16 18:15:53,466 DEBUG [RS:0;jenkins-hbase4:39753] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43791,DS-29c49014-e67d-4232-82d5-0b72dfba0857,DISK], DatanodeInfoWithStorage[127.0.0.1:34759,DS-695eb77d-ee54-4846-805d-18783915c745,DISK], DatanodeInfoWithStorage[127.0.0.1:41335,DS-c499a704-5e35-4d4a-b751-ff215efc7044,DISK]] 2023-07-16 18:15:53,469 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41335,DS-c499a704-5e35-4d4a-b751-ff215efc7044,DISK] 2023-07-16 18:15:53,469 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34759,DS-695eb77d-ee54-4846-805d-18783915c745,DISK] 2023-07-16 18:15:53,469 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43791,DS-29c49014-e67d-4232-82d5-0b72dfba0857,DISK] 2023-07-16 18:15:53,474 INFO [RS:1;jenkins-hbase4:36093] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/WALs/jenkins-hbase4.apache.org,36093,1689531352707/jenkins-hbase4.apache.org%2C36093%2C1689531352707.1689531353444 2023-07-16 18:15:53,474 DEBUG [RS:1;jenkins-hbase4:36093] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: 
[DatanodeInfoWithStorage[127.0.0.1:34759,DS-695eb77d-ee54-4846-805d-18783915c745,DISK], DatanodeInfoWithStorage[127.0.0.1:43791,DS-29c49014-e67d-4232-82d5-0b72dfba0857,DISK], DatanodeInfoWithStorage[127.0.0.1:41335,DS-c499a704-5e35-4d4a-b751-ff215efc7044,DISK]] 2023-07-16 18:15:53,483 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43791,DS-29c49014-e67d-4232-82d5-0b72dfba0857,DISK] 2023-07-16 18:15:53,483 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41335,DS-c499a704-5e35-4d4a-b751-ff215efc7044,DISK] 2023-07-16 18:15:53,486 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34759,DS-695eb77d-ee54-4846-805d-18783915c745,DISK] 2023-07-16 18:15:53,488 INFO [RS:2;jenkins-hbase4:34637] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/WALs/jenkins-hbase4.apache.org,34637,1689531352868/jenkins-hbase4.apache.org%2C34637%2C1689531352868.1689531353461 2023-07-16 18:15:53,491 DEBUG [RS:2;jenkins-hbase4:34637] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:43791,DS-29c49014-e67d-4232-82d5-0b72dfba0857,DISK], DatanodeInfoWithStorage[127.0.0.1:41335,DS-c499a704-5e35-4d4a-b751-ff215efc7044,DISK], DatanodeInfoWithStorage[127.0.0.1:34759,DS-695eb77d-ee54-4846-805d-18783915c745,DISK]] 2023-07-16 18:15:53,492 WARN [ReadOnlyZKClient-127.0.0.1:58951@0x5baa060a] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-16 18:15:53,492 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44131,1689531352332] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 18:15:53,493 INFO [RS-EventLoopGroup-11-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58350, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 18:15:53,493 DEBUG [RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=34637] ipc.CallRunner(144): callId: 0 service: ClientService methodName: Get size: 88 connection: 172.31.14.131:58350 deadline: 1689531413493, exception=org.apache.hadoop.hbase.NotServingRegionException: hbase:meta,,1 is not online on jenkins-hbase4.apache.org,34637,1689531352868 2023-07-16 18:15:53,593 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,34637,1689531352868 2023-07-16 18:15:53,596 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 18:15:53,597 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58352, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 18:15:53,607 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-16 18:15:53,607 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type 
class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 18:15:53,609 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C34637%2C1689531352868.meta, suffix=.meta, logDir=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/WALs/jenkins-hbase4.apache.org,34637,1689531352868, archiveDir=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/oldWALs, maxLogs=32 2023-07-16 18:15:53,625 DEBUG [RS-EventLoopGroup-11-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41335,DS-c499a704-5e35-4d4a-b751-ff215efc7044,DISK] 2023-07-16 18:15:53,625 DEBUG [RS-EventLoopGroup-11-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:34759,DS-695eb77d-ee54-4846-805d-18783915c745,DISK] 2023-07-16 18:15:53,625 DEBUG [RS-EventLoopGroup-11-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:43791,DS-29c49014-e67d-4232-82d5-0b72dfba0857,DISK] 2023-07-16 18:15:53,628 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/WALs/jenkins-hbase4.apache.org,34637,1689531352868/jenkins-hbase4.apache.org%2C34637%2C1689531352868.meta.1689531353609.meta 2023-07-16 18:15:53,628 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41335,DS-c499a704-5e35-4d4a-b751-ff215efc7044,DISK], DatanodeInfoWithStorage[127.0.0.1:34759,DS-695eb77d-ee54-4846-805d-18783915c745,DISK], DatanodeInfoWithStorage[127.0.0.1:43791,DS-29c49014-e67d-4232-82d5-0b72dfba0857,DISK]] 2023-07-16 18:15:53,628 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:53,628 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 18:15:53,629 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-16 18:15:53,629 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 
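Each region server above, and now the meta-carrying server, creates its WAL with "blocksize=256 MB, rollsize=128 MB, ..., maxLogs=32" and an AsyncFSWAL writer fanned out to the three mini-cluster datanodes. A short sketch of the configuration keys that usually drive those numbers; the key names are recalled HBase 2.x settings and should be read as assumptions rather than a definitive mapping for this test.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class WalConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();

    // Assumed keys behind the "WAL configuration" log line:
    conf.set("hbase.wal.provider", "asyncfs");                                // AsyncFSWALProvider, as logged
    conf.setLong("hbase.regionserver.hlog.blocksize", 256L * 1024 * 1024);    // blocksize=256 MB
    conf.setFloat("hbase.regionserver.logroll.multiplier", 0.5f);             // rollsize = 0.5 * blocksize = 128 MB
    conf.setInt("hbase.regionserver.maxlogs", 32);                            // maxLogs=32

    long rollSize = (long) (conf.getLong("hbase.regionserver.hlog.blocksize", 0)
        * conf.getFloat("hbase.regionserver.logroll.multiplier", 0.5f));
    System.out.println("roll size (bytes) = " + rollSize);
  }
}
```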
2023-07-16 18:15:53,629 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-16 18:15:53,629 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:53,629 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-16 18:15:53,629 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-16 18:15:53,630 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 18:15:53,631 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740/info 2023-07-16 18:15:53,631 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740/info 2023-07-16 18:15:53,631 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 18:15:53,632 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:53,632 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 18:15:53,633 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740/rep_barrier 2023-07-16 18:15:53,633 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740/rep_barrier 2023-07-16 18:15:53,633 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; 
off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 18:15:53,633 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:53,633 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 18:15:53,634 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740/table 2023-07-16 18:15:53,634 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740/table 2023-07-16 18:15:53,634 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 18:15:53,635 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:53,635 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740 2023-07-16 18:15:53,636 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740 2023-07-16 18:15:53,639 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 
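The CompactionConfiguration lines repeat once per column family of hbase:meta (info, rep_barrier, table) and record the effective compaction tuning: 3 to 10 files per compaction, ratio 1.2, off-peak ratio 5.0, and a major compaction period of 604800000 ms (7 days) with 0.5 jitter. A hedged sketch of the configuration keys these values typically come from; the key names are assumptions from memory, not read out of this log.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();

    // Assumed keys behind the CompactionConfiguration(173) log line:
    conf.setInt("hbase.hstore.compaction.min", 3);                 // minFilesToCompact:3
    conf.setInt("hbase.hstore.compaction.max", 10);                // maxFilesToCompact:10
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);          // ratio 1.200000
    conf.setFloat("hbase.hstore.compaction.ratio.offpeak", 5.0f);  // off-peak ratio 5.000000
    conf.setLong("hbase.hregion.majorcompaction", 604800000L);     // major period = 7 days in ms
    conf.setFloat("hbase.hregion.majorcompaction.jitter", 0.5f);   // major jitter 0.500000

    long majorPeriodMs = conf.getLong("hbase.hregion.majorcompaction", 0);
    System.out.println("major compactions roughly every " + majorPeriodMs / 86_400_000.0 + " days");
  }
}
```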
2023-07-16 18:15:53,640 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 18:15:53,641 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11203933280, jitterRate=0.043447598814964294}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 18:15:53,641 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 18:15:53,642 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689531353593 2023-07-16 18:15:53,646 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-16 18:15:53,646 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-16 18:15:53,647 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,34637,1689531352868, state=OPEN 2023-07-16 18:15:53,648 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-16 18:15:53,648 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 18:15:53,650 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-16 18:15:53,650 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,34637,1689531352868 in 212 msec 2023-07-16 18:15:53,651 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-16 18:15:53,652 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 373 msec 2023-07-16 18:15:53,653 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 466 msec 2023-07-16 18:15:53,653 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689531353653, completionTime=-1 2023-07-16 18:15:53,653 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-16 18:15:53,653 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 
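At this point hbase:meta replica 0 has moved OPENING → OPEN on jenkins-hbase4.apache.org,34637,1689531352868 and the InitMetaProcedure chain (pid=1..3) has completed. A small sketch of how a client would observe the same assignment through the public API; the quorum settings and printed fields are illustrative for this mini-cluster, not part of the test itself.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class MetaLocationSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // Assumed quorum; the mini-cluster above listens on 127.0.0.1:58951.
    conf.set("hbase.zookeeper.quorum", "127.0.0.1");
    conf.set("hbase.zookeeper.property.clientPort", "58951");

    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
      HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW);
      // Expected to name the server the master just recorded under /hbase/meta-region-server.
      System.out.println("hbase:meta is on " + loc.getServerName());
    }
  }
}
```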
2023-07-16 18:15:53,657 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-16 18:15:53,657 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689531413657 2023-07-16 18:15:53,657 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689531473657 2023-07-16 18:15:53,657 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 3 msec 2023-07-16 18:15:53,663 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44131,1689531352332-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,663 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44131,1689531352332-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,663 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44131,1689531352332-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,663 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:44131, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,663 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:53,663 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 
2023-07-16 18:15:53,664 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-16 18:15:53,664 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-16 18:15:53,664 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-16 18:15:53,666 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 18:15:53,666 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 18:15:53,668 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/.tmp/data/hbase/namespace/85d499a4717a980e9af0a5a4eb9fddf2 2023-07-16 18:15:53,668 DEBUG [HFileArchiver-8] backup.HFileArchiver(153): Directory hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/.tmp/data/hbase/namespace/85d499a4717a980e9af0a5a4eb9fddf2 empty. 2023-07-16 18:15:53,669 DEBUG [HFileArchiver-8] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/.tmp/data/hbase/namespace/85d499a4717a980e9af0a5a4eb9fddf2 2023-07-16 18:15:53,669 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-16 18:15:53,684 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-16 18:15:53,687 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 85d499a4717a980e9af0a5a4eb9fddf2, NAME => 'hbase:namespace,,1689531353663.85d499a4717a980e9af0a5a4eb9fddf2.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/.tmp 2023-07-16 18:15:53,699 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689531353663.85d499a4717a980e9af0a5a4eb9fddf2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:53,699 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 85d499a4717a980e9af0a5a4eb9fddf2, disabling compactions & flushes 2023-07-16 18:15:53,700 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689531353663.85d499a4717a980e9af0a5a4eb9fddf2. 
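The master has just stored pid=4, a CreateTableProcedure for 'hbase:namespace', with the column family shown in the log (info: IN_MEMORY=true, VERSIONS=10, BLOCKSIZE=8192, BLOOMFILTER=ROW). A sketch of building an equivalent descriptor with the 2.x client builders; since hbase:namespace is created by the master itself, the sketch targets a hypothetical user table, and the builder calls are recalled from the public API.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceLikeTableSketch {
  // Mirrors the descriptor logged for hbase:namespace, but for a hypothetical table "default:namespace_like".
  static TableDescriptor build() {
    ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
        .setInMemory(true)                 // IN_MEMORY => 'true'
        .setMaxVersions(10)                // VERSIONS => '10'
        .setBlocksize(8192)                // BLOCKSIZE => '8192'
        .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
        .build();
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("default:namespace_like"))
        .setColumnFamily(info)
        .build();
  }

  static void create(Admin admin) throws java.io.IOException {
    admin.createTable(build()); // drives the same CREATE_TABLE_* procedure states seen above
  }
}
```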
2023-07-16 18:15:53,700 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689531353663.85d499a4717a980e9af0a5a4eb9fddf2. 2023-07-16 18:15:53,700 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689531353663.85d499a4717a980e9af0a5a4eb9fddf2. after waiting 0 ms 2023-07-16 18:15:53,700 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689531353663.85d499a4717a980e9af0a5a4eb9fddf2. 2023-07-16 18:15:53,700 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689531353663.85d499a4717a980e9af0a5a4eb9fddf2. 2023-07-16 18:15:53,700 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 85d499a4717a980e9af0a5a4eb9fddf2: 2023-07-16 18:15:53,702 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 18:15:53,703 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689531353663.85d499a4717a980e9af0a5a4eb9fddf2.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689531353703"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531353703"}]},"ts":"1689531353703"} 2023-07-16 18:15:53,706 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 18:15:53,706 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 18:15:53,707 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531353707"}]},"ts":"1689531353707"} 2023-07-16 18:15:53,708 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-16 18:15:53,711 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 18:15:53,712 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 18:15:53,712 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 18:15:53,712 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 18:15:53,712 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 18:15:53,712 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=85d499a4717a980e9af0a5a4eb9fddf2, ASSIGN}] 2023-07-16 18:15:53,713 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=85d499a4717a980e9af0a5a4eb9fddf2, ASSIGN 2023-07-16 18:15:53,714 INFO [PEWorker-4] assignment.TransitRegionStateProcedure(193): Starting pid=5, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, 
locked=true; TransitRegionStateProcedure table=hbase:namespace, region=85d499a4717a980e9af0a5a4eb9fddf2, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,36093,1689531352707; forceNewPlan=false, retain=false 2023-07-16 18:15:53,795 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44131,1689531352332] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 18:15:53,797 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44131,1689531352332] procedure2.ProcedureExecutor(1029): Stored pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-16 18:15:53,799 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 18:15:53,800 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 18:15:53,802 DEBUG [HFileArchiver-4] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/.tmp/data/hbase/rsgroup/285248e8dfba1a8879a256a37f9acc4d 2023-07-16 18:15:53,802 DEBUG [HFileArchiver-4] backup.HFileArchiver(153): Directory hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/.tmp/data/hbase/rsgroup/285248e8dfba1a8879a256a37f9acc4d empty. 
2023-07-16 18:15:53,803 DEBUG [HFileArchiver-4] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/.tmp/data/hbase/rsgroup/285248e8dfba1a8879a256a37f9acc4d 2023-07-16 18:15:53,803 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-16 18:15:53,823 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-16 18:15:53,824 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => 285248e8dfba1a8879a256a37f9acc4d, NAME => 'hbase:rsgroup,,1689531353795.285248e8dfba1a8879a256a37f9acc4d.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/.tmp 2023-07-16 18:15:53,836 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689531353795.285248e8dfba1a8879a256a37f9acc4d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:53,836 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing 285248e8dfba1a8879a256a37f9acc4d, disabling compactions & flushes 2023-07-16 18:15:53,836 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689531353795.285248e8dfba1a8879a256a37f9acc4d. 2023-07-16 18:15:53,836 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689531353795.285248e8dfba1a8879a256a37f9acc4d. 2023-07-16 18:15:53,836 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689531353795.285248e8dfba1a8879a256a37f9acc4d. after waiting 0 ms 2023-07-16 18:15:53,836 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689531353795.285248e8dfba1a8879a256a37f9acc4d. 2023-07-16 18:15:53,836 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689531353795.285248e8dfba1a8879a256a37f9acc4d. 
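The 'hbase:rsgroup' descriptor above differs from the namespace one mainly in its table attributes: the MultiRowMutationEndpoint coprocessor and the DisabledRegionSplitPolicy. A hedged sketch of attaching both through the builder API; setCoprocessor and setRegionSplitPolicyClassName are recalled from the 2.x TableDescriptorBuilder and should be treated as assumptions, and the table name is hypothetical.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class RsGroupLikeTableSketch {
  static TableDescriptor build() throws java.io.IOException {
    return TableDescriptorBuilder.newBuilder(TableName.valueOf("default:rsgroup_like"))
        // NAME => 'm', all other column-family settings left at their defaults
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of(Bytes.toBytes("m")))
        // coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        // METADATA => {'SPLIT_POLICY' => '...DisabledRegionSplitPolicy'} keeps the table in a single region
        .setRegionSplitPolicyClassName("org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
        .build();
  }
}
```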
2023-07-16 18:15:53,836 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for 285248e8dfba1a8879a256a37f9acc4d: 2023-07-16 18:15:53,839 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 18:15:53,840 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689531353795.285248e8dfba1a8879a256a37f9acc4d.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689531353839"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531353839"}]},"ts":"1689531353839"} 2023-07-16 18:15:53,841 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 18:15:53,842 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 18:15:53,842 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531353842"}]},"ts":"1689531353842"} 2023-07-16 18:15:53,843 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-16 18:15:53,847 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 18:15:53,847 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 18:15:53,847 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 18:15:53,848 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 18:15:53,848 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 18:15:53,848 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=285248e8dfba1a8879a256a37f9acc4d, ASSIGN}] 2023-07-16 18:15:53,849 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=285248e8dfba1a8879a256a37f9acc4d, ASSIGN 2023-07-16 18:15:53,849 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=6, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=285248e8dfba1a8879a256a37f9acc4d, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,34637,1689531352868; forceNewPlan=false, retain=false 2023-07-16 18:15:53,849 INFO [jenkins-hbase4:44131] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 
2023-07-16 18:15:53,851 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=85d499a4717a980e9af0a5a4eb9fddf2, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,36093,1689531352707 2023-07-16 18:15:53,852 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689531353663.85d499a4717a980e9af0a5a4eb9fddf2.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689531353851"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531353851"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531353851"}]},"ts":"1689531353851"} 2023-07-16 18:15:53,852 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=285248e8dfba1a8879a256a37f9acc4d, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,34637,1689531352868 2023-07-16 18:15:53,852 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689531353795.285248e8dfba1a8879a256a37f9acc4d.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689531353852"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531353852"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531353852"}]},"ts":"1689531353852"} 2023-07-16 18:15:53,853 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=5, state=RUNNABLE; OpenRegionProcedure 85d499a4717a980e9af0a5a4eb9fddf2, server=jenkins-hbase4.apache.org,36093,1689531352707}] 2023-07-16 18:15:53,853 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 285248e8dfba1a8879a256a37f9acc4d, server=jenkins-hbase4.apache.org,34637,1689531352868}] 2023-07-16 18:15:54,005 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,36093,1689531352707 2023-07-16 18:15:54,006 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 18:15:54,007 INFO [RS-EventLoopGroup-10-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46260, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 18:15:54,011 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689531353795.285248e8dfba1a8879a256a37f9acc4d. 2023-07-16 18:15:54,011 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 285248e8dfba1a8879a256a37f9acc4d, NAME => 'hbase:rsgroup,,1689531353795.285248e8dfba1a8879a256a37f9acc4d.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:54,011 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 18:15:54,011 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689531353795.285248e8dfba1a8879a256a37f9acc4d. service=MultiRowMutationService 2023-07-16 18:15:54,011 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
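The RegionStateStore puts above write the info:regioninfo, info:sn and info:state columns for the two new regions straight into hbase:meta before the OpenRegionProcedures are dispatched. A short sketch of reading those columns back with an ordinary client scan; the column names match the log, while the connection handling is illustrative.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaStateScanSketch {
  static void dumpStates(Connection conn) throws java.io.IOException {
    byte[] info = Bytes.toBytes("info");
    Scan scan = new Scan()
        .addColumn(info, Bytes.toBytes("state"))  // e.g. OPENING/OPEN, as written by RegionStateStore
        .addColumn(info, Bytes.toBytes("sn"));    // serving ServerName, e.g. ...,36093,1689531352707
    try (Table meta = conn.getTable(TableName.META_TABLE_NAME);
         ResultScanner rs = meta.getScanner(scan)) {
      for (Result r : rs) {
        System.out.println(Bytes.toString(r.getRow()) + " state="
            + Bytes.toString(r.getValue(info, Bytes.toBytes("state"))));
      }
    }
  }
}
```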
2023-07-16 18:15:54,012 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup 285248e8dfba1a8879a256a37f9acc4d 2023-07-16 18:15:54,012 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689531353795.285248e8dfba1a8879a256a37f9acc4d.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:54,012 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 285248e8dfba1a8879a256a37f9acc4d 2023-07-16 18:15:54,012 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 285248e8dfba1a8879a256a37f9acc4d 2023-07-16 18:15:54,013 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689531353663.85d499a4717a980e9af0a5a4eb9fddf2. 2023-07-16 18:15:54,013 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 85d499a4717a980e9af0a5a4eb9fddf2, NAME => 'hbase:namespace,,1689531353663.85d499a4717a980e9af0a5a4eb9fddf2.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:54,013 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 85d499a4717a980e9af0a5a4eb9fddf2 2023-07-16 18:15:54,013 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689531353663.85d499a4717a980e9af0a5a4eb9fddf2.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:54,013 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 85d499a4717a980e9af0a5a4eb9fddf2 2023-07-16 18:15:54,013 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 85d499a4717a980e9af0a5a4eb9fddf2 2023-07-16 18:15:54,014 INFO [StoreOpener-285248e8dfba1a8879a256a37f9acc4d-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region 285248e8dfba1a8879a256a37f9acc4d 2023-07-16 18:15:54,014 INFO [StoreOpener-85d499a4717a980e9af0a5a4eb9fddf2-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 85d499a4717a980e9af0a5a4eb9fddf2 2023-07-16 18:15:54,015 DEBUG [StoreOpener-285248e8dfba1a8879a256a37f9acc4d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/rsgroup/285248e8dfba1a8879a256a37f9acc4d/m 2023-07-16 18:15:54,015 DEBUG [StoreOpener-285248e8dfba1a8879a256a37f9acc4d-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/rsgroup/285248e8dfba1a8879a256a37f9acc4d/m 2023-07-16 18:15:54,016 DEBUG 
[StoreOpener-85d499a4717a980e9af0a5a4eb9fddf2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/namespace/85d499a4717a980e9af0a5a4eb9fddf2/info 2023-07-16 18:15:54,016 DEBUG [StoreOpener-85d499a4717a980e9af0a5a4eb9fddf2-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/namespace/85d499a4717a980e9af0a5a4eb9fddf2/info 2023-07-16 18:15:54,016 INFO [StoreOpener-285248e8dfba1a8879a256a37f9acc4d-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 285248e8dfba1a8879a256a37f9acc4d columnFamilyName m 2023-07-16 18:15:54,016 INFO [StoreOpener-85d499a4717a980e9af0a5a4eb9fddf2-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 85d499a4717a980e9af0a5a4eb9fddf2 columnFamilyName info 2023-07-16 18:15:54,017 INFO [StoreOpener-285248e8dfba1a8879a256a37f9acc4d-1] regionserver.HStore(310): Store=285248e8dfba1a8879a256a37f9acc4d/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:54,017 INFO [StoreOpener-85d499a4717a980e9af0a5a4eb9fddf2-1] regionserver.HStore(310): Store=85d499a4717a980e9af0a5a4eb9fddf2/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:54,017 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/rsgroup/285248e8dfba1a8879a256a37f9acc4d 2023-07-16 18:15:54,017 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/namespace/85d499a4717a980e9af0a5a4eb9fddf2 2023-07-16 18:15:54,018 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/rsgroup/285248e8dfba1a8879a256a37f9acc4d 
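The CompactionConfiguration lines above print the effective compaction tuning for each store: minCompactSize 128 MB, minFilesToCompact 3, maxFilesToCompact 10, ratio 1.2, major period 604800000 ms. These values appear to correspond to the standard hbase.hstore.compaction.* and hbase.hregion.majorcompaction settings; a sketch of how they could be tuned in a site or client Configuration, with the numbers below simply echoing the defaults shown in the log:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionTuningSketch {
  public static Configuration tuned() {
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.hstore.compaction.min", 3);                        // minFilesToCompact
    conf.setInt("hbase.hstore.compaction.max", 10);                       // maxFilesToCompact
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f);                 // selection ratio
    conf.setLong("hbase.hstore.compaction.min.size", 128L * 1024 * 1024); // minCompactSize
    conf.setLong("hbase.hregion.majorcompaction", 604800000L);            // major period, ms (7 days)
    return conf;
  }
}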
2023-07-16 18:15:54,018 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/namespace/85d499a4717a980e9af0a5a4eb9fddf2 2023-07-16 18:15:54,021 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 285248e8dfba1a8879a256a37f9acc4d 2023-07-16 18:15:54,021 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 85d499a4717a980e9af0a5a4eb9fddf2 2023-07-16 18:15:54,025 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/rsgroup/285248e8dfba1a8879a256a37f9acc4d/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:54,025 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/namespace/85d499a4717a980e9af0a5a4eb9fddf2/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:54,025 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 285248e8dfba1a8879a256a37f9acc4d; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@10c680a, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:54,025 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 285248e8dfba1a8879a256a37f9acc4d: 2023-07-16 18:15:54,025 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 85d499a4717a980e9af0a5a4eb9fddf2; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11951938720, jitterRate=0.11311103403568268}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:54,026 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 85d499a4717a980e9af0a5a4eb9fddf2: 2023-07-16 18:15:54,026 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689531353795.285248e8dfba1a8879a256a37f9acc4d., pid=9, masterSystemTime=1689531354006 2023-07-16 18:15:54,026 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689531353663.85d499a4717a980e9af0a5a4eb9fddf2., pid=8, masterSystemTime=1689531354005 2023-07-16 18:15:54,030 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689531353795.285248e8dfba1a8879a256a37f9acc4d. 2023-07-16 18:15:54,030 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689531353795.285248e8dfba1a8879a256a37f9acc4d. 
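Note the two different split policies in the "Opened ..." lines: hbase:rsgroup comes up with DisabledRegionSplitPolicy, while hbase:namespace uses the default SteppingSplitPolicy chain. A split policy can be pinned per table through the descriptor; a sketch under the assumption that splitting should be disabled for a small system-style table (table name hypothetical):

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class SplitPolicySketch {
  public static TableDescriptor noSplit() {
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("demo_no_split"))              // hypothetical table name
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("info"))
        // Class name as printed in the log; regions of this table will never be split.
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
        .build();
  }
}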
2023-07-16 18:15:54,031 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=285248e8dfba1a8879a256a37f9acc4d, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,34637,1689531352868 2023-07-16 18:15:54,031 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689531353663.85d499a4717a980e9af0a5a4eb9fddf2. 2023-07-16 18:15:54,031 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689531353795.285248e8dfba1a8879a256a37f9acc4d.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689531354031"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531354031"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531354031"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531354031"}]},"ts":"1689531354031"} 2023-07-16 18:15:54,032 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689531353663.85d499a4717a980e9af0a5a4eb9fddf2. 2023-07-16 18:15:54,032 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=5 updating hbase:meta row=85d499a4717a980e9af0a5a4eb9fddf2, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,36093,1689531352707 2023-07-16 18:15:54,032 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689531353663.85d499a4717a980e9af0a5a4eb9fddf2.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689531354032"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531354032"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531354032"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531354032"}]},"ts":"1689531354032"} 2023-07-16 18:15:54,034 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-16 18:15:54,035 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 285248e8dfba1a8879a256a37f9acc4d, server=jenkins-hbase4.apache.org,34637,1689531352868 in 180 msec 2023-07-16 18:15:54,036 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=5 2023-07-16 18:15:54,037 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=5, state=SUCCESS; OpenRegionProcedure 85d499a4717a980e9af0a5a4eb9fddf2, server=jenkins-hbase4.apache.org,36093,1689531352707 in 182 msec 2023-07-16 18:15:54,037 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=6 2023-07-16 18:15:54,037 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=6, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=285248e8dfba1a8879a256a37f9acc4d, ASSIGN in 187 msec 2023-07-16 18:15:54,041 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 18:15:54,041 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531354041"}]},"ts":"1689531354041"} 2023-07-16 18:15:54,042 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): 
Finished subprocedure pid=5, resume processing ppid=4 2023-07-16 18:15:54,042 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=5, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=85d499a4717a980e9af0a5a4eb9fddf2, ASSIGN in 325 msec 2023-07-16 18:15:54,042 INFO [PEWorker-1] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-16 18:15:54,042 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 18:15:54,043 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531354042"}]},"ts":"1689531354042"} 2023-07-16 18:15:54,044 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-16 18:15:54,044 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=6, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 18:15:54,046 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=6, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 249 msec 2023-07-16 18:15:54,046 INFO [PEWorker-2] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 18:15:54,047 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 382 msec 2023-07-16 18:15:54,065 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-16 18:15:54,067 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-16 18:15:54,067 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:54,071 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 18:15:54,073 INFO [RS-EventLoopGroup-10-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46272, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 18:15:54,078 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-16 18:15:54,092 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 18:15:54,105 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 17 
msec 2023-07-16 18:15:54,105 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44131,1689531352332] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-16 18:15:54,105 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44131,1689531352332] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-16 18:15:54,110 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-16 18:15:54,117 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:54,117 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44131,1689531352332] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:54,120 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44131,1689531352332] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 18:15:54,121 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,44131,1689531352332] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-16 18:15:54,123 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 18:15:54,127 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 15 msec 2023-07-16 18:15:54,135 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-16 18:15:54,138 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-16 18:15:54,138 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.099sec 2023-07-16 18:15:54,138 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(103): Quota table not found. Creating... 
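At this point the RSGroupStartupWorker reports hbase:rsgroup online, GroupBasedLoadBalancer active, and the default and hbase namespaces created. Group membership can then be inspected or changed through the rsgroup admin client API; a rough sketch, assuming the hbase-rsgroup classes from this module are on the classpath (the particular call chosen here is illustrative):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class RSGroupSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // The "default" group is what the startup worker wrote under /hbase/rsgroup/default above.
      RSGroupInfo def = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
      System.out.println("default group servers: " + def.getServers());
    }
  }
}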
2023-07-16 18:15:54,138 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 18:15:54,139 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:quota 2023-07-16 18:15:54,139 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(107): Initializing quota support 2023-07-16 18:15:54,141 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 18:15:54,142 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 18:15:54,143 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(59): Namespace State Manager started. 2023-07-16 18:15:54,145 DEBUG [HFileArchiver-6] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/.tmp/data/hbase/quota/fcfa53171ffd6fbf416c88299952a593 2023-07-16 18:15:54,146 DEBUG [HFileArchiver-6] backup.HFileArchiver(153): Directory hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/.tmp/data/hbase/quota/fcfa53171ffd6fbf416c88299952a593 empty. 2023-07-16 18:15:54,147 DEBUG [HFileArchiver-6] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/.tmp/data/hbase/quota/fcfa53171ffd6fbf416c88299952a593 2023-07-16 18:15:54,147 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived hbase:quota regions 2023-07-16 18:15:54,149 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceStateManager(222): Finished updating state of 2 namespaces. 2023-07-16 18:15:54,149 INFO [master/jenkins-hbase4:0:becomeActiveMaster] namespace.NamespaceAuditor(50): NamespaceAuditor started. 2023-07-16 18:15:54,152 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:54,152 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=QuotaObserverChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:54,153 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 
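MasterQuotaManager is creating hbase:quota (families 'q' and 'u') and initializing quota support. Once that table is online, quotas are set through the Admin API; a small sketch of a request-count throttle, with the table name and limit chosen purely for illustration:

import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.quotas.QuotaSettings;
import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
import org.apache.hadoop.hbase.quotas.ThrottleType;

public class QuotaSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Persists a throttle entry into hbase:quota for the given table.
      QuotaSettings throttle = QuotaSettingsFactory.throttleTable(
          TableName.valueOf("demo_table"),                 // hypothetical table
          ThrottleType.REQUEST_NUMBER, 100, TimeUnit.SECONDS);
      admin.setQuota(throttle);
    }
  }
}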
2023-07-16 18:15:54,153 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-16 18:15:54,153 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44131,1689531352332-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-16 18:15:54,153 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,44131,1689531352332-MobCompactionChore, period=604800, unit=SECONDS is enabled. 2023-07-16 18:15:54,163 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-16 18:15:54,179 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/.tmp/data/hbase/quota/.tabledesc/.tableinfo.0000000001 2023-07-16 18:15:54,183 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(7675): creating {ENCODED => fcfa53171ffd6fbf416c88299952a593, NAME => 'hbase:quota,,1689531354138.fcfa53171ffd6fbf416c88299952a593.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:quota', {NAME => 'q', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'u', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/.tmp 2023-07-16 18:15:54,196 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689531354138.fcfa53171ffd6fbf416c88299952a593.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:54,196 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1604): Closing fcfa53171ffd6fbf416c88299952a593, disabling compactions & flushes 2023-07-16 18:15:54,196 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689531354138.fcfa53171ffd6fbf416c88299952a593. 2023-07-16 18:15:54,196 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689531354138.fcfa53171ffd6fbf416c88299952a593. 2023-07-16 18:15:54,196 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689531354138.fcfa53171ffd6fbf416c88299952a593. after waiting 0 ms 2023-07-16 18:15:54,196 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689531354138.fcfa53171ffd6fbf416c88299952a593. 2023-07-16 18:15:54,196 INFO [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1838): Closed hbase:quota,,1689531354138.fcfa53171ffd6fbf416c88299952a593. 
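The ChoreService lines above (QuotaObserverChore, ExpiredMobFileCleanerChore, MobCompactionChore) are periodic master tasks scheduled with the period and unit printed in each message. A bare-bones sketch of the same scheduling pattern, assuming ScheduledChore and ChoreService are usable as plain library classes outside the master; names and the sleep interval are illustrative:

import org.apache.hadoop.hbase.ChoreService;
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

public class ChoreSketch {
  public static void main(String[] args) throws InterruptedException {
    Stoppable stopper = new Stoppable() {
      private volatile boolean stopped;
      @Override public void stop(String why) { stopped = true; }
      @Override public boolean isStopped() { return stopped; }
    };
    // Period is interpreted in milliseconds by default, matching "unit=MILLISECONDS" above.
    ScheduledChore chore = new ScheduledChore("demo-chore", stopper, 60000) {
      @Override protected void chore() {
        // Periodic work goes here; QuotaObserverChore does its quota bookkeeping in this slot.
      }
    };
    ChoreService service = new ChoreService("demo");   // thread-name prefix, illustrative
    service.scheduleChore(chore);
    Thread.sleep(5000);
    stopper.stop("done");
    service.shutdown();
  }
}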
2023-07-16 18:15:54,196 DEBUG [RegionOpenAndInit-hbase:quota-pool-0] regionserver.HRegion(1558): Region close journal for fcfa53171ffd6fbf416c88299952a593: 2023-07-16 18:15:54,199 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 18:15:54,200 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:quota,,1689531354138.fcfa53171ffd6fbf416c88299952a593.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689531354200"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531354200"}]},"ts":"1689531354200"} 2023-07-16 18:15:54,201 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 18:15:54,202 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 18:15:54,202 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531354202"}]},"ts":"1689531354202"} 2023-07-16 18:15:54,203 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLING in hbase:meta 2023-07-16 18:15:54,208 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 18:15:54,208 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 18:15:54,208 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 18:15:54,208 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 18:15:54,208 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 18:15:54,209 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=fcfa53171ffd6fbf416c88299952a593, ASSIGN}] 2023-07-16 18:15:54,210 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:quota, region=fcfa53171ffd6fbf416c88299952a593, ASSIGN 2023-07-16 18:15:54,210 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:quota, region=fcfa53171ffd6fbf416c88299952a593, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39753,1689531352528; forceNewPlan=false, retain=false 2023-07-16 18:15:54,226 DEBUG [Listener at localhost/33941] zookeeper.ReadOnlyZKClient(139): Connect 0x5732b517 to 127.0.0.1:58951 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 18:15:54,232 DEBUG [Listener at localhost/33941] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2a4e8813, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 18:15:54,234 DEBUG 
[hconnection-0x57f67347-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 18:15:54,236 INFO [RS-EventLoopGroup-11-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:58362, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 18:15:54,237 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,44131,1689531352332 2023-07-16 18:15:54,238 INFO [Listener at localhost/33941] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:15:54,240 DEBUG [Listener at localhost/33941] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-16 18:15:54,241 INFO [RS-EventLoopGroup-8-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38830, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-16 18:15:54,245 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-16 18:15:54,245 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:54,246 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-16 18:15:54,247 DEBUG [Listener at localhost/33941] zookeeper.ReadOnlyZKClient(139): Connect 0x4162bfb9 to 127.0.0.1:58951 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 18:15:54,269 DEBUG [Listener at localhost/33941] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@256a9099, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 18:15:54,270 INFO [Listener at localhost/33941] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:58951 2023-07-16 18:15:54,275 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 18:15:54,278 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1016f5907f2000a connected 2023-07-16 18:15:54,279 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'np1', hbase.namespace.quota.maxregions => '5', hbase.namespace.quota.maxtables => '2'} 2023-07-16 18:15:54,282 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] procedure2.ProcedureExecutor(1029): Stored pid=14, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=np1 2023-07-16 18:15:54,287 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-16 18:15:54,294 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): 
master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 18:15:54,297 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=14, state=SUCCESS; CreateNamespaceProcedure, namespace=np1 in 16 msec 2023-07-16 18:15:54,361 INFO [jenkins-hbase4:44131] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-16 18:15:54,363 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=fcfa53171ffd6fbf416c88299952a593, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39753,1689531352528 2023-07-16 18:15:54,363 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:quota,,1689531354138.fcfa53171ffd6fbf416c88299952a593.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689531354362"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531354362"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531354362"}]},"ts":"1689531354362"} 2023-07-16 18:15:54,364 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=15, ppid=13, state=RUNNABLE; OpenRegionProcedure fcfa53171ffd6fbf416c88299952a593, server=jenkins-hbase4.apache.org,39753,1689531352528}] 2023-07-16 18:15:54,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=14 2023-07-16 18:15:54,394 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 18:15:54,396 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table1 2023-07-16 18:15:54,398 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 18:15:54,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table1" procId is: 16 2023-07-16 18:15:54,399 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-16 18:15:54,400 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:54,400 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 18:15:54,402 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 18:15:54,404 DEBUG [HFileArchiver-5] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/.tmp/data/np1/table1/49e7261639e568e5355c61847319ddc5 2023-07-16 18:15:54,405 DEBUG [HFileArchiver-5] 
backup.HFileArchiver(153): Directory hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/.tmp/data/np1/table1/49e7261639e568e5355c61847319ddc5 empty. 2023-07-16 18:15:54,405 DEBUG [HFileArchiver-5] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/.tmp/data/np1/table1/49e7261639e568e5355c61847319ddc5 2023-07-16 18:15:54,405 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-16 18:15:54,418 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/.tmp/data/np1/table1/.tabledesc/.tableinfo.0000000001 2023-07-16 18:15:54,419 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 49e7261639e568e5355c61847319ddc5, NAME => 'np1:table1,,1689531354393.49e7261639e568e5355c61847319ddc5.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='np1:table1', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/.tmp 2023-07-16 18:15:54,500 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-16 18:15:54,517 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,39753,1689531352528 2023-07-16 18:15:54,517 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 18:15:54,519 INFO [RS-EventLoopGroup-9-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38796, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 18:15:54,524 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:quota,,1689531354138.fcfa53171ffd6fbf416c88299952a593. 
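Earlier in this stretch the client (jenkins//172.31.14.131) asked the master to create namespace np1 with hbase.namespace.quota.maxregions=5 and hbase.namespace.quota.maxtables=2, and then table np1:table1 with family fam1. The equivalent calls on the Admin API look roughly like the following sketch, assuming an open Connection; the property keys and names are taken from the requests logged above:

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class NamespaceQuotaSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      // Namespace-level quota properties, as printed in the CreateNamespaceProcedure request above.
      admin.createNamespace(NamespaceDescriptor.create("np1")
          .addConfiguration("hbase.namespace.quota.maxregions", "5")
          .addConfiguration("hbase.namespace.quota.maxtables", "2")
          .build());
      // Single-family table inside that namespace, mirroring np1:table1 with family 'fam1'.
      admin.createTable(TableDescriptorBuilder
          .newBuilder(TableName.valueOf("np1", "table1"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
          .build());
    }
  }
}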
2023-07-16 18:15:54,524 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => fcfa53171ffd6fbf416c88299952a593, NAME => 'hbase:quota,,1689531354138.fcfa53171ffd6fbf416c88299952a593.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:54,524 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table quota fcfa53171ffd6fbf416c88299952a593 2023-07-16 18:15:54,524 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:quota,,1689531354138.fcfa53171ffd6fbf416c88299952a593.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:54,525 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for fcfa53171ffd6fbf416c88299952a593 2023-07-16 18:15:54,525 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for fcfa53171ffd6fbf416c88299952a593 2023-07-16 18:15:54,526 INFO [StoreOpener-fcfa53171ffd6fbf416c88299952a593-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family q of region fcfa53171ffd6fbf416c88299952a593 2023-07-16 18:15:54,528 DEBUG [StoreOpener-fcfa53171ffd6fbf416c88299952a593-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/quota/fcfa53171ffd6fbf416c88299952a593/q 2023-07-16 18:15:54,528 DEBUG [StoreOpener-fcfa53171ffd6fbf416c88299952a593-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/quota/fcfa53171ffd6fbf416c88299952a593/q 2023-07-16 18:15:54,529 INFO [StoreOpener-fcfa53171ffd6fbf416c88299952a593-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fcfa53171ffd6fbf416c88299952a593 columnFamilyName q 2023-07-16 18:15:54,529 INFO [StoreOpener-fcfa53171ffd6fbf416c88299952a593-1] regionserver.HStore(310): Store=fcfa53171ffd6fbf416c88299952a593/q, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:54,529 INFO [StoreOpener-fcfa53171ffd6fbf416c88299952a593-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family u of region fcfa53171ffd6fbf416c88299952a593 2023-07-16 18:15:54,531 DEBUG 
[StoreOpener-fcfa53171ffd6fbf416c88299952a593-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/quota/fcfa53171ffd6fbf416c88299952a593/u 2023-07-16 18:15:54,531 DEBUG [StoreOpener-fcfa53171ffd6fbf416c88299952a593-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/quota/fcfa53171ffd6fbf416c88299952a593/u 2023-07-16 18:15:54,531 INFO [StoreOpener-fcfa53171ffd6fbf416c88299952a593-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region fcfa53171ffd6fbf416c88299952a593 columnFamilyName u 2023-07-16 18:15:54,532 INFO [StoreOpener-fcfa53171ffd6fbf416c88299952a593-1] regionserver.HStore(310): Store=fcfa53171ffd6fbf416c88299952a593/u, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:54,533 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/quota/fcfa53171ffd6fbf416c88299952a593 2023-07-16 18:15:54,533 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/quota/fcfa53171ffd6fbf416c88299952a593 2023-07-16 18:15:54,535 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:quota descriptor;using region.getMemStoreFlushHeapSize/# of families (64.0 M)) instead. 
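The FlushLargeStoresPolicy message above notes that hbase:quota carries no hbase.hregion.percolumnfamilyflush.size.lower.bound in its descriptor, so the policy falls back to the region memstore flush size divided by the number of families (64 MB here, with two families). For a user table that lower bound can be pinned explicitly on the descriptor; a sketch, with the table name hypothetical and the 16 MB figure chosen only as an example:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class FlushLowerBoundSketch {
  public static TableDescriptor build() {
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("demo_two_families"))          // hypothetical table name
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("q"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("u"))
        // Key as printed in the log; value is an illustrative 16 MB per-family lower bound.
        .setValue("hbase.hregion.percolumnfamilyflush.size.lower.bound",
                  String.valueOf(16L * 1024 * 1024))
        .build();
  }
}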
2023-07-16 18:15:54,536 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for fcfa53171ffd6fbf416c88299952a593 2023-07-16 18:15:54,539 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/quota/fcfa53171ffd6fbf416c88299952a593/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:54,540 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened fcfa53171ffd6fbf416c88299952a593; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11560031680, jitterRate=0.07661184668540955}}}, FlushLargeStoresPolicy{flushSizeLowerBound=67108864} 2023-07-16 18:15:54,540 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for fcfa53171ffd6fbf416c88299952a593: 2023-07-16 18:15:54,541 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:quota,,1689531354138.fcfa53171ffd6fbf416c88299952a593., pid=15, masterSystemTime=1689531354517 2023-07-16 18:15:54,545 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:quota,,1689531354138.fcfa53171ffd6fbf416c88299952a593. 2023-07-16 18:15:54,546 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:quota,,1689531354138.fcfa53171ffd6fbf416c88299952a593. 2023-07-16 18:15:54,546 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=fcfa53171ffd6fbf416c88299952a593, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39753,1689531352528 2023-07-16 18:15:54,547 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:quota,,1689531354138.fcfa53171ffd6fbf416c88299952a593.","families":{"info":[{"qualifier":"regioninfo","vlen":37,"tag":[],"timestamp":"1689531354546"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531354546"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531354546"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531354546"}]},"ts":"1689531354546"} 2023-07-16 18:15:54,556 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=15, resume processing ppid=13 2023-07-16 18:15:54,556 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=15, ppid=13, state=SUCCESS; OpenRegionProcedure fcfa53171ffd6fbf416c88299952a593, server=jenkins-hbase4.apache.org,39753,1689531352528 in 184 msec 2023-07-16 18:15:54,558 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-16 18:15:54,558 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=hbase:quota, region=fcfa53171ffd6fbf416c88299952a593, ASSIGN in 348 msec 2023-07-16 18:15:54,558 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 18:15:54,559 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put 
{"totalColumns":1,"row":"hbase:quota","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531354559"}]},"ts":"1689531354559"} 2023-07-16 18:15:54,562 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:quota, state=ENABLED in hbase:meta 2023-07-16 18:15:54,565 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:quota execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 18:15:54,566 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=hbase:quota in 427 msec 2023-07-16 18:15:54,701 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-16 18:15:54,835 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(866): Instantiated np1:table1,,1689531354393.49e7261639e568e5355c61847319ddc5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:54,835 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1604): Closing 49e7261639e568e5355c61847319ddc5, disabling compactions & flushes 2023-07-16 18:15:54,835 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1626): Closing region np1:table1,,1689531354393.49e7261639e568e5355c61847319ddc5. 2023-07-16 18:15:54,835 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689531354393.49e7261639e568e5355c61847319ddc5. 2023-07-16 18:15:54,835 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689531354393.49e7261639e568e5355c61847319ddc5. after waiting 0 ms 2023-07-16 18:15:54,835 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689531354393.49e7261639e568e5355c61847319ddc5. 2023-07-16 18:15:54,835 INFO [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1838): Closed np1:table1,,1689531354393.49e7261639e568e5355c61847319ddc5. 2023-07-16 18:15:54,835 DEBUG [RegionOpenAndInit-np1:table1-pool-0] regionserver.HRegion(1558): Region close journal for 49e7261639e568e5355c61847319ddc5: 2023-07-16 18:15:54,838 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 18:15:54,844 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"np1:table1,,1689531354393.49e7261639e568e5355c61847319ddc5.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689531354844"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531354844"}]},"ts":"1689531354844"} 2023-07-16 18:15:54,845 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-16 18:15:54,846 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 18:15:54,846 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531354846"}]},"ts":"1689531354846"} 2023-07-16 18:15:54,848 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLING in hbase:meta 2023-07-16 18:15:54,851 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 18:15:54,852 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 18:15:54,852 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 18:15:54,852 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 18:15:54,852 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 18:15:54,852 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=49e7261639e568e5355c61847319ddc5, ASSIGN}] 2023-07-16 18:15:54,854 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=np1:table1, region=49e7261639e568e5355c61847319ddc5, ASSIGN 2023-07-16 18:15:54,855 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=np1:table1, region=49e7261639e568e5355c61847319ddc5, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,39753,1689531352528; forceNewPlan=false, retain=false 2023-07-16 18:15:55,002 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-16 18:15:55,005 INFO [jenkins-hbase4:44131] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 2023-07-16 18:15:55,006 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=49e7261639e568e5355c61847319ddc5, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,39753,1689531352528 2023-07-16 18:15:55,007 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689531354393.49e7261639e568e5355c61847319ddc5.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689531355006"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531355006"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531355006"}]},"ts":"1689531355006"} 2023-07-16 18:15:55,008 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; OpenRegionProcedure 49e7261639e568e5355c61847319ddc5, server=jenkins-hbase4.apache.org,39753,1689531352528}] 2023-07-16 18:15:55,164 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open np1:table1,,1689531354393.49e7261639e568e5355c61847319ddc5. 
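Procedures pid=17/pid=18 above assign the single region of np1:table1 to jenkins-hbase4.apache.org,39753,1689531352528 and that region server opens it. Where a region actually landed can be checked from a client with a RegionLocator; a short sketch reusing the np1:table1 name from the log (the probe row is arbitrary):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class RegionLocationSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf("np1", "table1"))) {
      // Single-region table: the location of any row is the location of the whole table.
      HRegionLocation loc = locator.getRegionLocation(Bytes.toBytes("any-row"));
      System.out.println(loc.getRegion().getEncodedName() + " is on " + loc.getServerName());
    }
  }
}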
2023-07-16 18:15:55,164 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 49e7261639e568e5355c61847319ddc5, NAME => 'np1:table1,,1689531354393.49e7261639e568e5355c61847319ddc5.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:55,164 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table table1 49e7261639e568e5355c61847319ddc5 2023-07-16 18:15:55,164 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated np1:table1,,1689531354393.49e7261639e568e5355c61847319ddc5.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:55,164 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 49e7261639e568e5355c61847319ddc5 2023-07-16 18:15:55,164 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 49e7261639e568e5355c61847319ddc5 2023-07-16 18:15:55,166 INFO [StoreOpener-49e7261639e568e5355c61847319ddc5-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family fam1 of region 49e7261639e568e5355c61847319ddc5 2023-07-16 18:15:55,167 DEBUG [StoreOpener-49e7261639e568e5355c61847319ddc5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/np1/table1/49e7261639e568e5355c61847319ddc5/fam1 2023-07-16 18:15:55,167 DEBUG [StoreOpener-49e7261639e568e5355c61847319ddc5-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/np1/table1/49e7261639e568e5355c61847319ddc5/fam1 2023-07-16 18:15:55,168 INFO [StoreOpener-49e7261639e568e5355c61847319ddc5-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 49e7261639e568e5355c61847319ddc5 columnFamilyName fam1 2023-07-16 18:15:55,168 INFO [StoreOpener-49e7261639e568e5355c61847319ddc5-1] regionserver.HStore(310): Store=49e7261639e568e5355c61847319ddc5/fam1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:55,171 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/np1/table1/49e7261639e568e5355c61847319ddc5 2023-07-16 18:15:55,172 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under 
hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/np1/table1/49e7261639e568e5355c61847319ddc5 2023-07-16 18:15:55,175 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 49e7261639e568e5355c61847319ddc5 2023-07-16 18:15:55,179 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/np1/table1/49e7261639e568e5355c61847319ddc5/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:55,179 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 49e7261639e568e5355c61847319ddc5; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11351451520, jitterRate=0.0571863055229187}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:55,179 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 49e7261639e568e5355c61847319ddc5: 2023-07-16 18:15:55,180 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for np1:table1,,1689531354393.49e7261639e568e5355c61847319ddc5., pid=18, masterSystemTime=1689531355159 2023-07-16 18:15:55,184 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for np1:table1,,1689531354393.49e7261639e568e5355c61847319ddc5. 2023-07-16 18:15:55,184 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened np1:table1,,1689531354393.49e7261639e568e5355c61847319ddc5. 2023-07-16 18:15:55,184 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=49e7261639e568e5355c61847319ddc5, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,39753,1689531352528 2023-07-16 18:15:55,184 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"np1:table1,,1689531354393.49e7261639e568e5355c61847319ddc5.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689531355184"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531355184"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531355184"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531355184"}]},"ts":"1689531355184"} 2023-07-16 18:15:55,187 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-16 18:15:55,187 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; OpenRegionProcedure 49e7261639e568e5355c61847319ddc5, server=jenkins-hbase4.apache.org,39753,1689531352528 in 177 msec 2023-07-16 18:15:55,191 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-16 18:15:55,192 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=49e7261639e568e5355c61847319ddc5, ASSIGN in 335 msec 2023-07-16 18:15:55,192 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 18:15:55,192 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): 
Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531355192"}]},"ts":"1689531355192"} 2023-07-16 18:15:55,193 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=ENABLED in hbase:meta 2023-07-16 18:15:55,195 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=16, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=np1:table1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 18:15:55,197 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; CreateTableProcedure table=np1:table1 in 801 msec 2023-07-16 18:15:55,503 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-16 18:15:55,504 INFO [Listener at localhost/33941] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: np1:table1, procId: 16 completed 2023-07-16 18:15:55,505 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 'np1:table2', {NAME => 'fam1', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 18:15:55,506 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=np1:table2 2023-07-16 18:15:55,508 INFO [PEWorker-1] procedure.CreateTableProcedure(80): pid=19, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=np1:table2 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 18:15:55,510 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "np1" qualifier: "table2" procId is: 19 2023-07-16 18:15:55,510 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-16 18:15:55,526 DEBUG [PEWorker-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 18:15:55,527 INFO [RS-EventLoopGroup-9-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:38810, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 18:15:55,531 INFO [PEWorker-1] procedure2.ProcedureExecutor(1528): Rolled back pid=19, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.quotas.QuotaExceededException via master-create-table:org.apache.hadoop.hbase.quotas.QuotaExceededException: The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace.; CreateTableProcedure table=np1:table2 exec-time=25 msec 2023-07-16 18:15:55,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-16 18:15:55,614 INFO [Listener at localhost/33941] client.HBaseAdmin$TableFuture(3548): Operation: CREATE, Table Name: np1:table2, procId: 19 failed with The table np1:table2 is not allowed to have 6 regions. 
The total number of regions permitted is only 5, while current region count is 1. This may be transient, please retry later if there are any ongoing split operations in the namespace. 2023-07-16 18:15:55,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:15:55,615 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:15:55,616 INFO [Listener at localhost/33941] client.HBaseAdmin$15(890): Started disable of np1:table1 2023-07-16 18:15:55,617 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable np1:table1 2023-07-16 18:15:55,617 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=np1:table1 2023-07-16 18:15:55,619 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-16 18:15:55,620 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531355620"}]},"ts":"1689531355620"} 2023-07-16 18:15:55,621 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLING in hbase:meta 2023-07-16 18:15:55,622 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set np1:table1 to state=DISABLING 2023-07-16 18:15:55,623 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=49e7261639e568e5355c61847319ddc5, UNASSIGN}] 2023-07-16 18:15:55,624 INFO [PEWorker-4] procedure.MasterProcedureScheduler(727): Took xlock for pid=21, ppid=20, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=np1:table1, region=49e7261639e568e5355c61847319ddc5, UNASSIGN 2023-07-16 18:15:55,624 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=49e7261639e568e5355c61847319ddc5, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,39753,1689531352528 2023-07-16 18:15:55,624 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"np1:table1,,1689531354393.49e7261639e568e5355c61847319ddc5.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689531355624"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531355624"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531355624"}]},"ts":"1689531355624"} 2023-07-16 18:15:55,625 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=22, ppid=21, state=RUNNABLE; CloseRegionProcedure 49e7261639e568e5355c61847319ddc5, server=jenkins-hbase4.apache.org,39753,1689531352528}] 2023-07-16 18:15:55,720 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-16 18:15:55,777 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 49e7261639e568e5355c61847319ddc5 2023-07-16 18:15:55,778 DEBUG 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 49e7261639e568e5355c61847319ddc5, disabling compactions & flushes 2023-07-16 18:15:55,779 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region np1:table1,,1689531354393.49e7261639e568e5355c61847319ddc5. 2023-07-16 18:15:55,779 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on np1:table1,,1689531354393.49e7261639e568e5355c61847319ddc5. 2023-07-16 18:15:55,779 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on np1:table1,,1689531354393.49e7261639e568e5355c61847319ddc5. after waiting 0 ms 2023-07-16 18:15:55,779 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region np1:table1,,1689531354393.49e7261639e568e5355c61847319ddc5. 2023-07-16 18:15:55,782 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/np1/table1/49e7261639e568e5355c61847319ddc5/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 18:15:55,783 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed np1:table1,,1689531354393.49e7261639e568e5355c61847319ddc5. 2023-07-16 18:15:55,783 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 49e7261639e568e5355c61847319ddc5: 2023-07-16 18:15:55,784 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 49e7261639e568e5355c61847319ddc5 2023-07-16 18:15:55,785 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=21 updating hbase:meta row=49e7261639e568e5355c61847319ddc5, regionState=CLOSED 2023-07-16 18:15:55,785 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"np1:table1,,1689531354393.49e7261639e568e5355c61847319ddc5.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689531355785"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531355785"}]},"ts":"1689531355785"} 2023-07-16 18:15:55,787 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=22, resume processing ppid=21 2023-07-16 18:15:55,787 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=22, ppid=21, state=SUCCESS; CloseRegionProcedure 49e7261639e568e5355c61847319ddc5, server=jenkins-hbase4.apache.org,39753,1689531352528 in 161 msec 2023-07-16 18:15:55,789 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=21, resume processing ppid=20 2023-07-16 18:15:55,789 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=21, ppid=20, state=SUCCESS; TransitRegionStateProcedure table=np1:table1, region=49e7261639e568e5355c61847319ddc5, UNASSIGN in 164 msec 2023-07-16 18:15:55,789 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531355789"}]},"ts":"1689531355789"} 2023-07-16 18:15:55,790 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=np1:table1, state=DISABLED in hbase:meta 2023-07-16 18:15:55,792 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set np1:table1 to state=DISABLED 2023-07-16 18:15:55,795 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished 
pid=20, state=SUCCESS; DisableTableProcedure table=np1:table1 in 176 msec 2023-07-16 18:15:55,921 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-16 18:15:55,921 INFO [Listener at localhost/33941] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: np1:table1, procId: 20 completed 2023-07-16 18:15:55,922 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete np1:table1 2023-07-16 18:15:55,923 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] procedure2.ProcedureExecutor(1029): Stored pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=np1:table1 2023-07-16 18:15:55,925 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=23, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-16 18:15:55,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 'np1:table1' from rsgroup 'default' 2023-07-16 18:15:55,925 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=23, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=np1:table1 2023-07-16 18:15:55,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:15:55,927 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 18:15:55,929 DEBUG [HFileArchiver-2] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/.tmp/data/np1/table1/49e7261639e568e5355c61847319ddc5 2023-07-16 18:15:55,931 DEBUG [HFileArchiver-2] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/.tmp/data/np1/table1/49e7261639e568e5355c61847319ddc5/fam1, FileablePath, hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/.tmp/data/np1/table1/49e7261639e568e5355c61847319ddc5/recovered.edits] 2023-07-16 18:15:55,931 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-16 18:15:55,935 DEBUG [HFileArchiver-2] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/.tmp/data/np1/table1/49e7261639e568e5355c61847319ddc5/recovered.edits/4.seqid to hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/archive/data/np1/table1/49e7261639e568e5355c61847319ddc5/recovered.edits/4.seqid 2023-07-16 18:15:55,935 DEBUG [HFileArchiver-2] backup.HFileArchiver(596): Deleted hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/.tmp/data/np1/table1/49e7261639e568e5355c61847319ddc5 2023-07-16 18:15:55,935 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived np1:table1 regions 2023-07-16 18:15:55,937 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=23, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=np1:table1 2023-07-16 18:15:55,939 WARN [PEWorker-4] 
procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of np1:table1 from hbase:meta 2023-07-16 18:15:55,940 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(421): Removing 'np1:table1' descriptor. 2023-07-16 18:15:55,947 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=23, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=np1:table1 2023-07-16 18:15:55,947 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(411): Removing 'np1:table1' from region states. 2023-07-16 18:15:55,947 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1,,1689531354393.49e7261639e568e5355c61847319ddc5.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689531355947"}]},"ts":"9223372036854775807"} 2023-07-16 18:15:55,948 INFO [PEWorker-4] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-16 18:15:55,949 DEBUG [PEWorker-4] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 49e7261639e568e5355c61847319ddc5, NAME => 'np1:table1,,1689531354393.49e7261639e568e5355c61847319ddc5.', STARTKEY => '', ENDKEY => ''}] 2023-07-16 18:15:55,949 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(415): Marking 'np1:table1' as deleted. 2023-07-16 18:15:55,949 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"np1:table1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689531355949"}]},"ts":"9223372036854775807"} 2023-07-16 18:15:55,950 INFO [PEWorker-4] hbase.MetaTableAccessor(1658): Deleted table np1:table1 state from META 2023-07-16 18:15:55,951 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(130): Finished pid=23, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=np1:table1 2023-07-16 18:15:55,952 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=23, state=SUCCESS; DeleteTableProcedure table=np1:table1 in 30 msec 2023-07-16 18:15:56,032 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=23 2023-07-16 18:15:56,032 INFO [Listener at localhost/33941] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: np1:table1, procId: 23 completed 2023-07-16 18:15:56,037 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete np1 2023-07-16 18:15:56,043 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] procedure2.ProcedureExecutor(1029): Stored pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=np1 2023-07-16 18:15:56,045 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-16 18:15:56,048 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-16 18:15:56,050 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-16 18:15:56,050 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-16 18:15:56,051 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, 
quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/np1 2023-07-16 18:15:56,051 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 18:15:56,051 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-16 18:15:56,053 INFO [PEWorker-3] procedure.DeleteNamespaceProcedure(73): pid=24, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=np1 2023-07-16 18:15:56,054 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=24, state=SUCCESS; DeleteNamespaceProcedure, namespace=np1 in 16 msec 2023-07-16 18:15:56,151 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=44131] master.MasterRpcServices(1230): Checking to see if procedure is done pid=24 2023-07-16 18:15:56,152 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-16 18:15:56,152 INFO [Listener at localhost/33941] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-16 18:15:56,152 DEBUG [Listener at localhost/33941] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5732b517 to 127.0.0.1:58951 2023-07-16 18:15:56,152 DEBUG [Listener at localhost/33941] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:15:56,152 DEBUG [Listener at localhost/33941] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-16 18:15:56,153 DEBUG [Listener at localhost/33941] util.JVMClusterUtil(257): Found active master hash=1937350085, stopped=false 2023-07-16 18:15:56,153 DEBUG [Listener at localhost/33941] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-16 18:15:56,153 DEBUG [Listener at localhost/33941] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-16 18:15:56,153 DEBUG [Listener at localhost/33941] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.quotas.MasterQuotasObserver 2023-07-16 18:15:56,153 INFO [Listener at localhost/33941] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,44131,1689531352332 2023-07-16 18:15:56,155 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 18:15:56,155 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:36093-0x1016f5907f20002, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 18:15:56,155 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:56,155 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:39753-0x1016f5907f20001, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, 
state=SyncConnected, path=/hbase/running 2023-07-16 18:15:56,155 INFO [Listener at localhost/33941] procedure2.ProcedureExecutor(629): Stopping 2023-07-16 18:15:56,155 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:34637-0x1016f5907f20003, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 18:15:56,157 DEBUG [Listener at localhost/33941] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5baa060a to 127.0.0.1:58951 2023-07-16 18:15:56,157 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 18:15:56,157 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:34637-0x1016f5907f20003, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 18:15:56,157 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:39753-0x1016f5907f20001, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 18:15:56,157 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:36093-0x1016f5907f20002, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 18:15:56,157 DEBUG [Listener at localhost/33941] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:15:56,158 INFO [Listener at localhost/33941] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,39753,1689531352528' ***** 2023-07-16 18:15:56,158 INFO [Listener at localhost/33941] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 18:15:56,158 INFO [Listener at localhost/33941] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,36093,1689531352707' ***** 2023-07-16 18:15:56,158 INFO [RS:0;jenkins-hbase4:39753] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 18:15:56,158 INFO [Listener at localhost/33941] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 18:15:56,158 INFO [Listener at localhost/33941] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,34637,1689531352868' ***** 2023-07-16 18:15:56,158 INFO [RS:1;jenkins-hbase4:36093] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 18:15:56,158 INFO [Listener at localhost/33941] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 18:15:56,166 INFO [RS:2;jenkins-hbase4:34637] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 18:15:56,175 INFO [RS:0;jenkins-hbase4:39753] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@19a55b52{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 18:15:56,175 INFO [RS:1;jenkins-hbase4:36093] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@44267f0{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 18:15:56,175 INFO [RS:2;jenkins-hbase4:34637] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.w.WebAppContext@5c5ad983{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 18:15:56,175 INFO [RS:1;jenkins-hbase4:36093] server.AbstractConnector(383): Stopped ServerConnector@82d2458{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 18:15:56,175 INFO [RS:2;jenkins-hbase4:34637] server.AbstractConnector(383): Stopped ServerConnector@789ac99f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 18:15:56,175 INFO [RS:0;jenkins-hbase4:39753] server.AbstractConnector(383): Stopped ServerConnector@62fe5a63{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 18:15:56,175 INFO [RS:2;jenkins-hbase4:34637] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 18:15:56,175 INFO [RS:0;jenkins-hbase4:39753] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 18:15:56,175 INFO [RS:1;jenkins-hbase4:36093] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 18:15:56,176 INFO [RS:2;jenkins-hbase4:34637] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6a2ebbcc{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 18:15:56,178 INFO [RS:1;jenkins-hbase4:36093] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7b6d1808{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 18:15:56,178 INFO [RS:2;jenkins-hbase4:34637] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1782cf7b{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/hadoop.log.dir/,STOPPED} 2023-07-16 18:15:56,178 INFO [RS:1;jenkins-hbase4:36093] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1ac8173c{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/hadoop.log.dir/,STOPPED} 2023-07-16 18:15:56,178 INFO [RS:0;jenkins-hbase4:39753] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@14dd322{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 18:15:56,179 INFO [RS:0;jenkins-hbase4:39753] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@1cc8b87a{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/hadoop.log.dir/,STOPPED} 2023-07-16 18:15:56,180 INFO [RS:0;jenkins-hbase4:39753] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 18:15:56,180 INFO [RS:0;jenkins-hbase4:39753] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 18:15:56,180 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 18:15:56,180 INFO [RS:0;jenkins-hbase4:39753] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
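
The QuotaExceededException that rolled back pid=19 above ("The table np1:table2 is not allowed to have 6 regions. The total number of regions permitted is only 5, while current region count is 1.") comes from the namespace region-quota check. A hedged sketch of how such a limit is typically configured and tripped; it assumes quota support is enabled (the hbase:quota table and MasterQuotasObserver appear in this log), that np1 carries hbase.namespace.quota.maxregions=5 consistent with the error message, and that the second table is pre-split enough to exceed the cap -- the exact split count used by the test is not shown in this excerpt:

    import java.io.IOException;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.Bytes;

    public class NamespaceRegionQuotaSketch {
      static void demo(Admin admin) throws IOException {
        // Namespace capped at 5 regions across all of its tables.
        admin.createNamespace(NamespaceDescriptor.create("np1")
            .addConfiguration("hbase.namespace.quota.maxregions", "5")
            .build());

        // np1:table1 consumes 1 of the 5 permitted regions.
        admin.createTable(TableDescriptorBuilder
            .newBuilder(TableName.valueOf("np1", "table1"))
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
            .build());

        // A pre-split np1:table2 would push the namespace past its quota, so the
        // master rolls the CreateTableProcedure back (pid=19 above) and the client
        // sees the failure as a QuotaExceededException.
        byte[][] splits = { Bytes.toBytes("b"), Bytes.toBytes("c"),
            Bytes.toBytes("d"), Bytes.toBytes("e") };  // 5 regions requested
        try {
          admin.createTable(TableDescriptorBuilder
              .newBuilder(TableName.valueOf("np1", "table2"))
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"))
              .build(), splits);
        } catch (IOException expected) {
          // org.apache.hadoop.hbase.quotas.QuotaExceededException, as logged above.
        }
      }
    }
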
2023-07-16 18:15:56,180 INFO [RS:0;jenkins-hbase4:39753] regionserver.HRegionServer(3305): Received CLOSE for fcfa53171ffd6fbf416c88299952a593 2023-07-16 18:15:56,181 INFO [RS:0;jenkins-hbase4:39753] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,39753,1689531352528 2023-07-16 18:15:56,181 INFO [RS:2;jenkins-hbase4:34637] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 18:15:56,181 INFO [RS:1;jenkins-hbase4:36093] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 18:15:56,182 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 18:15:56,182 INFO [RS:1;jenkins-hbase4:36093] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 18:15:56,182 INFO [RS:2;jenkins-hbase4:34637] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 18:15:56,183 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 18:15:56,182 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing fcfa53171ffd6fbf416c88299952a593, disabling compactions & flushes 2023-07-16 18:15:56,181 DEBUG [RS:0;jenkins-hbase4:39753] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x74cca31d to 127.0.0.1:58951 2023-07-16 18:15:56,183 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:quota,,1689531354138.fcfa53171ffd6fbf416c88299952a593. 2023-07-16 18:15:56,183 DEBUG [RS:0;jenkins-hbase4:39753] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:15:56,183 INFO [RS:2;jenkins-hbase4:34637] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 18:15:56,183 INFO [RS:1;jenkins-hbase4:36093] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 18:15:56,184 INFO [RS:2;jenkins-hbase4:34637] regionserver.HRegionServer(3305): Received CLOSE for 285248e8dfba1a8879a256a37f9acc4d 2023-07-16 18:15:56,184 INFO [RS:1;jenkins-hbase4:36093] regionserver.HRegionServer(3305): Received CLOSE for 85d499a4717a980e9af0a5a4eb9fddf2 2023-07-16 18:15:56,184 INFO [RS:0;jenkins-hbase4:39753] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-16 18:15:56,183 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:quota,,1689531354138.fcfa53171ffd6fbf416c88299952a593. 2023-07-16 18:15:56,184 INFO [RS:2;jenkins-hbase4:34637] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,34637,1689531352868 2023-07-16 18:15:56,184 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:quota,,1689531354138.fcfa53171ffd6fbf416c88299952a593. 
after waiting 0 ms 2023-07-16 18:15:56,184 DEBUG [RS:2;jenkins-hbase4:34637] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x11dcc035 to 127.0.0.1:58951 2023-07-16 18:15:56,184 DEBUG [RS:0;jenkins-hbase4:39753] regionserver.HRegionServer(1478): Online Regions={fcfa53171ffd6fbf416c88299952a593=hbase:quota,,1689531354138.fcfa53171ffd6fbf416c88299952a593.} 2023-07-16 18:15:56,185 DEBUG [RS:2;jenkins-hbase4:34637] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:15:56,187 DEBUG [RS:0;jenkins-hbase4:39753] regionserver.HRegionServer(1504): Waiting on fcfa53171ffd6fbf416c88299952a593 2023-07-16 18:15:56,187 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 85d499a4717a980e9af0a5a4eb9fddf2, disabling compactions & flushes 2023-07-16 18:15:56,185 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 285248e8dfba1a8879a256a37f9acc4d, disabling compactions & flushes 2023-07-16 18:15:56,184 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:quota,,1689531354138.fcfa53171ffd6fbf416c88299952a593. 2023-07-16 18:15:56,184 INFO [RS:1;jenkins-hbase4:36093] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,36093,1689531352707 2023-07-16 18:15:56,188 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689531353795.285248e8dfba1a8879a256a37f9acc4d. 2023-07-16 18:15:56,188 DEBUG [RS:1;jenkins-hbase4:36093] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6decd915 to 127.0.0.1:58951 2023-07-16 18:15:56,188 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689531353663.85d499a4717a980e9af0a5a4eb9fddf2. 2023-07-16 18:15:56,187 INFO [RS:2;jenkins-hbase4:34637] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 18:15:56,188 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689531353663.85d499a4717a980e9af0a5a4eb9fddf2. 2023-07-16 18:15:56,188 DEBUG [RS:1;jenkins-hbase4:36093] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:15:56,188 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689531353795.285248e8dfba1a8879a256a37f9acc4d. 2023-07-16 18:15:56,188 INFO [RS:1;jenkins-hbase4:36093] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-16 18:15:56,188 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689531353663.85d499a4717a980e9af0a5a4eb9fddf2. after waiting 0 ms 2023-07-16 18:15:56,188 INFO [RS:2;jenkins-hbase4:34637] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 18:15:56,188 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689531353663.85d499a4717a980e9af0a5a4eb9fddf2. 
2023-07-16 18:15:56,188 DEBUG [RS:1;jenkins-hbase4:36093] regionserver.HRegionServer(1478): Online Regions={85d499a4717a980e9af0a5a4eb9fddf2=hbase:namespace,,1689531353663.85d499a4717a980e9af0a5a4eb9fddf2.} 2023-07-16 18:15:56,188 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 85d499a4717a980e9af0a5a4eb9fddf2 1/1 column families, dataSize=215 B heapSize=776 B 2023-07-16 18:15:56,188 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689531353795.285248e8dfba1a8879a256a37f9acc4d. after waiting 0 ms 2023-07-16 18:15:56,189 DEBUG [RS:1;jenkins-hbase4:36093] regionserver.HRegionServer(1504): Waiting on 85d499a4717a980e9af0a5a4eb9fddf2 2023-07-16 18:15:56,188 INFO [RS:2;jenkins-hbase4:34637] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 18:15:56,189 INFO [RS:2;jenkins-hbase4:34637] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-16 18:15:56,189 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689531353795.285248e8dfba1a8879a256a37f9acc4d. 2023-07-16 18:15:56,189 INFO [RS:2;jenkins-hbase4:34637] regionserver.HRegionServer(1474): Waiting on 2 regions to close 2023-07-16 18:15:56,189 DEBUG [RS:2;jenkins-hbase4:34637] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740, 285248e8dfba1a8879a256a37f9acc4d=hbase:rsgroup,,1689531353795.285248e8dfba1a8879a256a37f9acc4d.} 2023-07-16 18:15:56,189 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 285248e8dfba1a8879a256a37f9acc4d 1/1 column families, dataSize=585 B heapSize=1.04 KB 2023-07-16 18:15:56,189 DEBUG [RS:2;jenkins-hbase4:34637] regionserver.HRegionServer(1504): Waiting on 1588230740, 285248e8dfba1a8879a256a37f9acc4d 2023-07-16 18:15:56,189 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 18:15:56,189 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 18:15:56,189 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 18:15:56,189 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 18:15:56,189 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 18:15:56,189 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=5.89 KB heapSize=11.09 KB 2023-07-16 18:15:56,193 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/quota/fcfa53171ffd6fbf416c88299952a593/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 18:15:56,194 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:quota,,1689531354138.fcfa53171ffd6fbf416c88299952a593. 
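
The DisableTableProcedure (pid=20), DeleteTableProcedure (pid=23) and DeleteNamespaceProcedure (pid=24) earlier in this log form the standard teardown path for a namespaced table: disable, delete the table, then drop the now-empty namespace. An illustrative Admin-side sketch of that sequence (names taken from the log; not the literal test code):

    import java.io.IOException;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    public class Np1CleanupSketch {
      static void cleanup(Admin admin) throws IOException {
        TableName table1 = TableName.valueOf("np1", "table1");
        if (admin.tableExists(table1)) {
          if (admin.isTableEnabled(table1)) {
            admin.disableTable(table1);   // DisableTableProcedure, pid=20 above
          }
          admin.deleteTable(table1);      // DeleteTableProcedure, pid=23 above
        }
        // A namespace can only be removed once it contains no tables.
        admin.deleteNamespace("np1");     // DeleteNamespaceProcedure, pid=24 above
      }
    }
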
2023-07-16 18:15:56,194 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for fcfa53171ffd6fbf416c88299952a593: 2023-07-16 18:15:56,194 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:quota,,1689531354138.fcfa53171ffd6fbf416c88299952a593. 2023-07-16 18:15:56,201 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 18:15:56,216 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 18:15:56,220 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=585 B at sequenceid=7 (bloomFilter=true), to=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/rsgroup/285248e8dfba1a8879a256a37f9acc4d/.tmp/m/4fc9cc33e7a542bfa61ff2a19e922885 2023-07-16 18:15:56,220 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=215 B at sequenceid=8 (bloomFilter=true), to=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/namespace/85d499a4717a980e9af0a5a4eb9fddf2/.tmp/info/a4e8de13b8a441f8b4682d58c26559aa 2023-07-16 18:15:56,224 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=5.26 KB at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740/.tmp/info/09164ed555ae4fd9bf85ece785605508 2023-07-16 18:15:56,224 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 18:15:56,229 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a4e8de13b8a441f8b4682d58c26559aa 2023-07-16 18:15:56,230 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/rsgroup/285248e8dfba1a8879a256a37f9acc4d/.tmp/m/4fc9cc33e7a542bfa61ff2a19e922885 as hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/rsgroup/285248e8dfba1a8879a256a37f9acc4d/m/4fc9cc33e7a542bfa61ff2a19e922885 2023-07-16 18:15:56,230 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/namespace/85d499a4717a980e9af0a5a4eb9fddf2/.tmp/info/a4e8de13b8a441f8b4682d58c26559aa as hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/namespace/85d499a4717a980e9af0a5a4eb9fddf2/info/a4e8de13b8a441f8b4682d58c26559aa 2023-07-16 18:15:56,235 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 09164ed555ae4fd9bf85ece785605508 2023-07-16 18:15:56,238 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/rsgroup/285248e8dfba1a8879a256a37f9acc4d/m/4fc9cc33e7a542bfa61ff2a19e922885, entries=1, sequenceid=7, filesize=4.9 K 2023-07-16 18:15:56,239 INFO 
[RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for a4e8de13b8a441f8b4682d58c26559aa 2023-07-16 18:15:56,239 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/namespace/85d499a4717a980e9af0a5a4eb9fddf2/info/a4e8de13b8a441f8b4682d58c26559aa, entries=3, sequenceid=8, filesize=5.0 K 2023-07-16 18:15:56,241 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~215 B/215, heapSize ~760 B/760, currentSize=0 B/0 for 85d499a4717a980e9af0a5a4eb9fddf2 in 53ms, sequenceid=8, compaction requested=false 2023-07-16 18:15:56,241 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:namespace' 2023-07-16 18:15:56,242 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~585 B/585, heapSize ~1.02 KB/1048, currentSize=0 B/0 for 285248e8dfba1a8879a256a37f9acc4d in 53ms, sequenceid=7, compaction requested=false 2023-07-16 18:15:56,242 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:rsgroup' 2023-07-16 18:15:56,259 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=90 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740/.tmp/rep_barrier/b216c5886eb64632b1a43f355cae534b 2023-07-16 18:15:56,259 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/rsgroup/285248e8dfba1a8879a256a37f9acc4d/recovered.edits/10.seqid, newMaxSeqId=10, maxSeqId=1 2023-07-16 18:15:56,259 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/namespace/85d499a4717a980e9af0a5a4eb9fddf2/recovered.edits/11.seqid, newMaxSeqId=11, maxSeqId=1 2023-07-16 18:15:56,259 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 18:15:56,260 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689531353795.285248e8dfba1a8879a256a37f9acc4d. 2023-07-16 18:15:56,260 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 285248e8dfba1a8879a256a37f9acc4d: 2023-07-16 18:15:56,260 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689531353795.285248e8dfba1a8879a256a37f9acc4d. 2023-07-16 18:15:56,261 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689531353663.85d499a4717a980e9af0a5a4eb9fddf2. 
2023-07-16 18:15:56,261 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 85d499a4717a980e9af0a5a4eb9fddf2: 2023-07-16 18:15:56,261 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689531353663.85d499a4717a980e9af0a5a4eb9fddf2. 2023-07-16 18:15:56,265 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b216c5886eb64632b1a43f355cae534b 2023-07-16 18:15:56,275 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=562 B at sequenceid=31 (bloomFilter=false), to=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740/.tmp/table/8be1cb626dfc4bcfaa0dec5219cef081 2023-07-16 18:15:56,280 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8be1cb626dfc4bcfaa0dec5219cef081 2023-07-16 18:15:56,281 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740/.tmp/info/09164ed555ae4fd9bf85ece785605508 as hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740/info/09164ed555ae4fd9bf85ece785605508 2023-07-16 18:15:56,286 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 09164ed555ae4fd9bf85ece785605508 2023-07-16 18:15:56,286 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740/info/09164ed555ae4fd9bf85ece785605508, entries=32, sequenceid=31, filesize=8.5 K 2023-07-16 18:15:56,287 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740/.tmp/rep_barrier/b216c5886eb64632b1a43f355cae534b as hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740/rep_barrier/b216c5886eb64632b1a43f355cae534b 2023-07-16 18:15:56,294 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for b216c5886eb64632b1a43f355cae534b 2023-07-16 18:15:56,294 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740/rep_barrier/b216c5886eb64632b1a43f355cae534b, entries=1, sequenceid=31, filesize=4.9 K 2023-07-16 18:15:56,295 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740/.tmp/table/8be1cb626dfc4bcfaa0dec5219cef081 as hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740/table/8be1cb626dfc4bcfaa0dec5219cef081 2023-07-16 18:15:56,301 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): 
Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8be1cb626dfc4bcfaa0dec5219cef081 2023-07-16 18:15:56,301 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740/table/8be1cb626dfc4bcfaa0dec5219cef081, entries=8, sequenceid=31, filesize=5.2 K 2023-07-16 18:15:56,302 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~5.89 KB/6036, heapSize ~11.05 KB/11312, currentSize=0 B/0 for 1588230740 in 113ms, sequenceid=31, compaction requested=false 2023-07-16 18:15:56,302 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsTableSourceImpl(130): Creating new MetricsTableSourceImpl for table 'hbase:meta' 2023-07-16 18:15:56,312 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/data/hbase/meta/1588230740/recovered.edits/34.seqid, newMaxSeqId=34, maxSeqId=1 2023-07-16 18:15:56,313 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 18:15:56,313 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 18:15:56,313 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-16 18:15:56,313 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-16 18:15:56,318 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-16 18:15:56,318 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-16 18:15:56,370 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-16 18:15:56,370 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-16 18:15:56,387 INFO [RS:0;jenkins-hbase4:39753] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,39753,1689531352528; all regions closed. 2023-07-16 18:15:56,387 DEBUG [RS:0;jenkins-hbase4:39753] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-16 18:15:56,388 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: CompactionChecker was stopped 2023-07-16 18:15:56,388 INFO [regionserver/jenkins-hbase4:0.Chore.1] hbase.ScheduledChore(146): Chore: MemstoreFlusherChore was stopped 2023-07-16 18:15:56,389 INFO [RS:1;jenkins-hbase4:36093] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,36093,1689531352707; all regions closed. 2023-07-16 18:15:56,389 DEBUG [RS:1;jenkins-hbase4:36093] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 2023-07-16 18:15:56,390 INFO [RS:2;jenkins-hbase4:34637] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,34637,1689531352868; all regions closed. 2023-07-16 18:15:56,390 DEBUG [RS:2;jenkins-hbase4:34637] quotas.QuotaCache(100): Stopping QuotaRefresherChore chore. 
2023-07-16 18:15:56,401 DEBUG [RS:0;jenkins-hbase4:39753] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/oldWALs 2023-07-16 18:15:56,401 INFO [RS:0;jenkins-hbase4:39753] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C39753%2C1689531352528:(num 1689531353440) 2023-07-16 18:15:56,401 DEBUG [RS:0;jenkins-hbase4:39753] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:15:56,401 INFO [RS:0;jenkins-hbase4:39753] regionserver.LeaseManager(133): Closed leases 2023-07-16 18:15:56,403 INFO [RS:0;jenkins-hbase4:39753] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-16 18:15:56,403 INFO [RS:0;jenkins-hbase4:39753] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 18:15:56,403 INFO [RS:0;jenkins-hbase4:39753] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 18:15:56,403 INFO [RS:0;jenkins-hbase4:39753] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 18:15:56,403 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 18:15:56,409 INFO [RS:0;jenkins-hbase4:39753] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:39753 2023-07-16 18:15:56,411 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:39753-0x1016f5907f20001, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39753,1689531352528 2023-07-16 18:15:56,411 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:39753-0x1016f5907f20001, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:15:56,411 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:34637-0x1016f5907f20003, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39753,1689531352528 2023-07-16 18:15:56,411 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:36093-0x1016f5907f20002, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,39753,1689531352528 2023-07-16 18:15:56,411 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:36093-0x1016f5907f20002, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:15:56,411 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:15:56,411 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:34637-0x1016f5907f20003, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:15:56,413 INFO [RegionServerTracker-0] 
master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,39753,1689531352528] 2023-07-16 18:15:56,413 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,39753,1689531352528; numProcessing=1 2023-07-16 18:15:56,415 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,39753,1689531352528 already deleted, retry=false 2023-07-16 18:15:56,415 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,39753,1689531352528 expired; onlineServers=2 2023-07-16 18:15:56,418 DEBUG [RS:2;jenkins-hbase4:34637] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/oldWALs 2023-07-16 18:15:56,418 INFO [RS:2;jenkins-hbase4:34637] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34637%2C1689531352868.meta:.meta(num 1689531353609) 2023-07-16 18:15:56,418 DEBUG [RS:1;jenkins-hbase4:36093] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/oldWALs 2023-07-16 18:15:56,418 INFO [RS:1;jenkins-hbase4:36093] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C36093%2C1689531352707:(num 1689531353444) 2023-07-16 18:15:56,418 DEBUG [RS:1;jenkins-hbase4:36093] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:15:56,418 INFO [RS:1;jenkins-hbase4:36093] regionserver.LeaseManager(133): Closed leases 2023-07-16 18:15:56,419 INFO [RS:1;jenkins-hbase4:36093] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-16 18:15:56,419 INFO [RS:1;jenkins-hbase4:36093] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 18:15:56,419 INFO [RS:1;jenkins-hbase4:36093] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 18:15:56,419 INFO [RS:1;jenkins-hbase4:36093] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 18:15:56,420 INFO [RS:1;jenkins-hbase4:36093] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:36093 2023-07-16 18:15:56,420 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-16 18:15:56,426 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:34637-0x1016f5907f20003, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36093,1689531352707 2023-07-16 18:15:56,426 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:36093-0x1016f5907f20002, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,36093,1689531352707 2023-07-16 18:15:56,429 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:15:56,431 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,36093,1689531352707] 2023-07-16 18:15:56,431 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,36093,1689531352707; numProcessing=2 2023-07-16 18:15:56,432 DEBUG [RS:2;jenkins-hbase4:34637] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/oldWALs 2023-07-16 18:15:56,432 INFO [RS:2;jenkins-hbase4:34637] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C34637%2C1689531352868:(num 1689531353461) 2023-07-16 18:15:56,432 DEBUG [RS:2;jenkins-hbase4:34637] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:15:56,432 INFO [RS:2;jenkins-hbase4:34637] regionserver.LeaseManager(133): Closed leases 2023-07-16 18:15:56,432 INFO [RS:2;jenkins-hbase4:34637] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-16 18:15:56,433 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
2023-07-16 18:15:56,433 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,36093,1689531352707 already deleted, retry=false 2023-07-16 18:15:56,433 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,36093,1689531352707 expired; onlineServers=1 2023-07-16 18:15:56,434 INFO [RS:2;jenkins-hbase4:34637] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:34637 2023-07-16 18:15:56,438 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:15:56,438 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:34637-0x1016f5907f20003, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,34637,1689531352868 2023-07-16 18:15:56,439 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,34637,1689531352868] 2023-07-16 18:15:56,439 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,34637,1689531352868; numProcessing=3 2023-07-16 18:15:56,441 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,34637,1689531352868 already deleted, retry=false 2023-07-16 18:15:56,441 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,34637,1689531352868 expired; onlineServers=0 2023-07-16 18:15:56,441 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,44131,1689531352332' ***** 2023-07-16 18:15:56,441 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-16 18:15:56,442 DEBUG [M:0;jenkins-hbase4:44131] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@77460772, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 18:15:56,442 INFO [M:0;jenkins-hbase4:44131] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 18:15:56,444 INFO [M:0;jenkins-hbase4:44131] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@2e164a3a{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-16 18:15:56,445 INFO [M:0;jenkins-hbase4:44131] server.AbstractConnector(383): Stopped ServerConnector@78016569{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 18:15:56,445 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-16 18:15:56,445 INFO [M:0;jenkins-hbase4:44131] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 18:15:56,445 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, 
type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:56,445 INFO [M:0;jenkins-hbase4:44131] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2e75a497{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 18:15:56,445 INFO [M:0;jenkins-hbase4:44131] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@780bfb43{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/hadoop.log.dir/,STOPPED} 2023-07-16 18:15:56,446 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 18:15:56,446 INFO [M:0;jenkins-hbase4:44131] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,44131,1689531352332 2023-07-16 18:15:56,446 INFO [M:0;jenkins-hbase4:44131] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,44131,1689531352332; all regions closed. 2023-07-16 18:15:56,446 DEBUG [M:0;jenkins-hbase4:44131] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:15:56,446 INFO [M:0;jenkins-hbase4:44131] master.HMaster(1491): Stopping master jetty server 2023-07-16 18:15:56,447 INFO [M:0;jenkins-hbase4:44131] server.AbstractConnector(383): Stopped ServerConnector@d6d2719{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 18:15:56,448 DEBUG [M:0;jenkins-hbase4:44131] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-16 18:15:56,448 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-16 18:15:56,448 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689531353202] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689531353202,5,FailOnTimeoutGroup] 2023-07-16 18:15:56,448 DEBUG [M:0;jenkins-hbase4:44131] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-16 18:15:56,448 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689531353202] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689531353202,5,FailOnTimeoutGroup] 2023-07-16 18:15:56,449 INFO [M:0;jenkins-hbase4:44131] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-16 18:15:56,450 INFO [M:0;jenkins-hbase4:44131] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 
2023-07-16 18:15:56,450 INFO [M:0;jenkins-hbase4:44131] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [ScheduledChore name=QuotaObserverChore, period=60000, unit=MILLISECONDS] on shutdown 2023-07-16 18:15:56,450 DEBUG [M:0;jenkins-hbase4:44131] master.HMaster(1512): Stopping service threads 2023-07-16 18:15:56,450 INFO [M:0;jenkins-hbase4:44131] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-16 18:15:56,451 ERROR [M:0;jenkins-hbase4:44131] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-16 18:15:56,451 INFO [M:0;jenkins-hbase4:44131] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-16 18:15:56,451 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 2023-07-16 18:15:56,452 DEBUG [M:0;jenkins-hbase4:44131] zookeeper.ZKUtil(398): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-16 18:15:56,452 WARN [M:0;jenkins-hbase4:44131] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-16 18:15:56,452 INFO [M:0;jenkins-hbase4:44131] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-16 18:15:56,452 INFO [M:0;jenkins-hbase4:44131] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-16 18:15:56,453 DEBUG [M:0;jenkins-hbase4:44131] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-16 18:15:56,453 INFO [M:0;jenkins-hbase4:44131] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 18:15:56,453 DEBUG [M:0;jenkins-hbase4:44131] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 18:15:56,453 DEBUG [M:0;jenkins-hbase4:44131] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-16 18:15:56,453 DEBUG [M:0;jenkins-hbase4:44131] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-16 18:15:56,453 INFO [M:0;jenkins-hbase4:44131] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=93.00 KB heapSize=109.15 KB 2023-07-16 18:15:56,472 INFO [M:0;jenkins-hbase4:44131] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=93.00 KB at sequenceid=194 (bloomFilter=true), to=hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/dcd758190e914551847ec0a4ee160ffd 2023-07-16 18:15:56,478 DEBUG [M:0;jenkins-hbase4:44131] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/dcd758190e914551847ec0a4ee160ffd as hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/dcd758190e914551847ec0a4ee160ffd 2023-07-16 18:15:56,484 INFO [M:0;jenkins-hbase4:44131] regionserver.HStore(1080): Added hdfs://localhost:40765/user/jenkins/test-data/9891f0a6-513d-b33f-4820-41cbcf26319e/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/dcd758190e914551847ec0a4ee160ffd, entries=24, sequenceid=194, filesize=12.4 K 2023-07-16 18:15:56,484 INFO [M:0;jenkins-hbase4:44131] regionserver.HRegion(2948): Finished flush of dataSize ~93.00 KB/95237, heapSize ~109.13 KB/111752, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 31ms, sequenceid=194, compaction requested=false 2023-07-16 18:15:56,486 INFO [M:0;jenkins-hbase4:44131] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 18:15:56,486 DEBUG [M:0;jenkins-hbase4:44131] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 18:15:56,499 INFO [M:0;jenkins-hbase4:44131] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-16 18:15:56,499 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 18:15:56,500 INFO [M:0;jenkins-hbase4:44131] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:44131 2023-07-16 18:15:56,502 DEBUG [M:0;jenkins-hbase4:44131] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,44131,1689531352332 already deleted, retry=false 2023-07-16 18:15:56,757 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:15:56,757 INFO [M:0;jenkins-hbase4:44131] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,44131,1689531352332; zookeeper connection closed. 2023-07-16 18:15:56,757 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): master:44131-0x1016f5907f20000, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:15:56,858 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:34637-0x1016f5907f20003, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:15:56,858 INFO [RS:2;jenkins-hbase4:34637] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,34637,1689531352868; zookeeper connection closed. 
2023-07-16 18:15:56,858 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:34637-0x1016f5907f20003, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:15:56,859 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@628e274e] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@628e274e 2023-07-16 18:15:56,958 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:36093-0x1016f5907f20002, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:15:56,958 INFO [RS:1;jenkins-hbase4:36093] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,36093,1689531352707; zookeeper connection closed. 2023-07-16 18:15:56,958 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:36093-0x1016f5907f20002, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:15:56,959 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@7111e44] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@7111e44 2023-07-16 18:15:57,058 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:39753-0x1016f5907f20001, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:15:57,058 INFO [RS:0;jenkins-hbase4:39753] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,39753,1689531352528; zookeeper connection closed. 2023-07-16 18:15:57,058 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): regionserver:39753-0x1016f5907f20001, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:15:57,059 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@2826ae11] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@2826ae11 2023-07-16 18:15:57,059 INFO [Listener at localhost/33941] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 3 regionserver(s) complete 2023-07-16 18:15:57,059 WARN [Listener at localhost/33941] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 18:15:57,063 INFO [Listener at localhost/33941] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 18:15:57,169 WARN [BP-1075770033-172.31.14.131-1689531351413 heartbeating to localhost/127.0.0.1:40765] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 18:15:57,169 WARN [BP-1075770033-172.31.14.131-1689531351413 heartbeating to localhost/127.0.0.1:40765] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1075770033-172.31.14.131-1689531351413 (Datanode Uuid 144f6767-fb73-4624-be56-8470e5f45b15) service to localhost/127.0.0.1:40765 2023-07-16 18:15:57,170 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/cluster_3f1a6989-300d-fb2d-a4a0-f96584988f7d/dfs/data/data5/current/BP-1075770033-172.31.14.131-1689531351413] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 
2023-07-16 18:15:57,170 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/cluster_3f1a6989-300d-fb2d-a4a0-f96584988f7d/dfs/data/data6/current/BP-1075770033-172.31.14.131-1689531351413] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 18:15:57,173 WARN [Listener at localhost/33941] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 18:15:57,176 INFO [Listener at localhost/33941] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 18:15:57,282 WARN [BP-1075770033-172.31.14.131-1689531351413 heartbeating to localhost/127.0.0.1:40765] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 18:15:57,283 WARN [BP-1075770033-172.31.14.131-1689531351413 heartbeating to localhost/127.0.0.1:40765] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1075770033-172.31.14.131-1689531351413 (Datanode Uuid 72410e25-76b2-421c-bc45-56632eaac1de) service to localhost/127.0.0.1:40765 2023-07-16 18:15:57,289 WARN [Listener at localhost/33941] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 18:15:57,294 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/cluster_3f1a6989-300d-fb2d-a4a0-f96584988f7d/dfs/data/data3/current/BP-1075770033-172.31.14.131-1689531351413] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 18:15:57,294 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/cluster_3f1a6989-300d-fb2d-a4a0-f96584988f7d/dfs/data/data4/current/BP-1075770033-172.31.14.131-1689531351413] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 18:15:57,300 INFO [Listener at localhost/33941] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 18:15:57,301 WARN [HBase-Metrics2-1] impl.MetricsConfig(128): Cannot locate configuration: tried hadoop-metrics2-datanode.properties,hadoop-metrics2.properties 2023-07-16 18:15:57,402 WARN [BP-1075770033-172.31.14.131-1689531351413 heartbeating to localhost/127.0.0.1:40765] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 18:15:57,403 WARN [BP-1075770033-172.31.14.131-1689531351413 heartbeating to localhost/127.0.0.1:40765] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-1075770033-172.31.14.131-1689531351413 (Datanode Uuid ee7e8b8d-e553-4339-83a7-e7e361bf8bba) service to localhost/127.0.0.1:40765 2023-07-16 18:15:57,403 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/cluster_3f1a6989-300d-fb2d-a4a0-f96584988f7d/dfs/data/data1/current/BP-1075770033-172.31.14.131-1689531351413] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 18:15:57,404 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/cluster_3f1a6989-300d-fb2d-a4a0-f96584988f7d/dfs/data/data2/current/BP-1075770033-172.31.14.131-1689531351413] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 18:15:57,417 INFO [Listener at localhost/33941] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 18:15:57,539 INFO [Listener at localhost/33941] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-16 18:15:57,567 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(1293): Minicluster is down 2023-07-16 18:15:57,567 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(1068): Starting up minicluster with option: StartMiniClusterOption{numMasters=1, masterClass=null, numRegionServers=3, rsPorts=, rsClass=null, numDataNodes=3, dataNodeHosts=null, numZkServers=1, createRootDir=false, createWALDir=false} 2023-07-16 18:15:57,567 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.log.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/hadoop.log.dir so I do NOT create it in target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c 2023-07-16 18:15:57,567 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(445): System.getProperty("hadoop.tmp.dir") already set to: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/8ccd5c03-7dd0-0741-9334-1f163324ab0d/hadoop.tmp.dir so I do NOT create it in target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c 2023-07-16 18:15:57,567 INFO [Listener at localhost/33941] hbase.HBaseZKTestingUtility(82): Created new mini-cluster data directory: /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/cluster_3cda5aa3-2ebb-c38f-dd35-e868f192707f, deleteOnExit=true 2023-07-16 18:15:57,567 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(1082): STARTING DFS 2023-07-16 18:15:57,568 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(772): Setting test.cache.data to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/test.cache.data in system properties and HBase conf 2023-07-16 18:15:57,568 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(772): Setting hadoop.tmp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/hadoop.tmp.dir in system properties and HBase conf 2023-07-16 18:15:57,568 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(772): Setting hadoop.log.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/hadoop.log.dir in system properties and HBase conf 2023-07-16 18:15:57,568 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.local.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/mapreduce.cluster.local.dir in system properties and HBase conf 2023-07-16 18:15:57,568 INFO [Listener at 
localhost/33941] hbase.HBaseTestingUtility(772): Setting mapreduce.cluster.temp.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/mapreduce.cluster.temp.dir in system properties and HBase conf 2023-07-16 18:15:57,568 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(759): read short circuit is OFF 2023-07-16 18:15:57,568 DEBUG [Listener at localhost/33941] fs.HFileSystem(308): The file system is not a DistributedFileSystem. Skipping on block location reordering 2023-07-16 18:15:57,568 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(772): Setting yarn.node-labels.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/yarn.node-labels.fs-store.root-dir in system properties and HBase conf 2023-07-16 18:15:57,568 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(772): Setting yarn.node-attribute.fs-store.root-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/yarn.node-attribute.fs-store.root-dir in system properties and HBase conf 2023-07-16 18:15:57,569 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.log-dirs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/yarn.nodemanager.log-dirs in system properties and HBase conf 2023-07-16 18:15:57,569 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-16 18:15:57,569 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.active-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/yarn.timeline-service.entity-group-fs-store.active-dir in system properties and HBase conf 2023-07-16 18:15:57,569 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(772): Setting yarn.timeline-service.entity-group-fs-store.done-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/yarn.timeline-service.entity-group-fs-store.done-dir in system properties and HBase conf 2023-07-16 18:15:57,569 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(772): Setting yarn.nodemanager.remote-app-log-dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/yarn.nodemanager.remote-app-log-dir in system properties and HBase conf 2023-07-16 18:15:57,569 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-16 18:15:57,569 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(772): Setting dfs.datanode.shared.file.descriptor.paths to 
/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/dfs.datanode.shared.file.descriptor.paths in system properties and HBase conf 2023-07-16 18:15:57,569 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(772): Setting nfs.dump.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/nfs.dump.dir in system properties and HBase conf 2023-07-16 18:15:57,569 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(772): Setting java.io.tmpdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/java.io.tmpdir in system properties and HBase conf 2023-07-16 18:15:57,569 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(772): Setting dfs.journalnode.edits.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/dfs.journalnode.edits.dir in system properties and HBase conf 2023-07-16 18:15:57,569 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(772): Setting dfs.provided.aliasmap.inmemory.leveldb.dir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/dfs.provided.aliasmap.inmemory.leveldb.dir in system properties and HBase conf 2023-07-16 18:15:57,570 INFO [Listener at localhost/33941] hbase.HBaseTestingUtility(772): Setting fs.s3a.committer.staging.tmp.path to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/fs.s3a.committer.staging.tmp.path in system properties and HBase conf Formatting using clusterid: testClusterID 2023-07-16 18:15:57,574 WARN [Listener at localhost/33941] conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-16 18:15:57,574 WARN [Listener at localhost/33941] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-16 18:15:57,621 WARN [Listener at localhost/33941] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 18:15:57,623 INFO [Listener at localhost/33941] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 18:15:57,629 INFO [Listener at localhost/33941] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/hdfs to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/java.io.tmpdir/Jetty_localhost_34935_hdfs____.dko0sc/webapp 2023-07-16 18:15:57,637 DEBUG [Listener at localhost/33941-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient-0x1016f5907f2000a, quorum=127.0.0.1:58951, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Disconnected, path=null 2023-07-16 18:15:57,637 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(630): VerifyingRSGroupAdminClient-0x1016f5907f2000a, quorum=127.0.0.1:58951, baseZNode=/hbase Received Disconnected from ZooKeeper, ignoring 2023-07-16 18:15:57,722 INFO [Listener at localhost/33941] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34935 2023-07-16 18:15:57,727 WARN [Listener at localhost/33941] 
conf.Configuration(1701): No unit for dfs.heartbeat.interval(3) assuming SECONDS 2023-07-16 18:15:57,727 WARN [Listener at localhost/33941] conf.Configuration(1701): No unit for dfs.namenode.safemode.extension(0) assuming MILLISECONDS 2023-07-16 18:15:57,770 WARN [Listener at localhost/39679] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 18:15:57,827 WARN [Listener at localhost/39679] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 18:15:57,829 WARN [Listener at localhost/39679] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 18:15:57,830 INFO [Listener at localhost/39679] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 18:15:57,836 INFO [Listener at localhost/39679] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/java.io.tmpdir/Jetty_localhost_37071_datanode____ac5j4d/webapp 2023-07-16 18:15:57,929 INFO [Listener at localhost/39679] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37071 2023-07-16 18:15:57,936 WARN [Listener at localhost/35169] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 18:15:57,950 WARN [Listener at localhost/35169] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 18:15:57,952 WARN [Listener at localhost/35169] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 18:15:57,953 INFO [Listener at localhost/35169] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 18:15:57,957 INFO [Listener at localhost/35169] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/java.io.tmpdir/Jetty_localhost_44103_datanode____.a9azl1/webapp 2023-07-16 18:15:58,036 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x76c0c89d43dfc19: Processing first storage report for DS-63b2dce7-ee25-4e15-8bc1-af885cb05c90 from datanode e8550b7d-3fad-4456-a753-a006c77d6029 2023-07-16 18:15:58,036 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x76c0c89d43dfc19: from storage DS-63b2dce7-ee25-4e15-8bc1-af885cb05c90 node DatanodeRegistration(127.0.0.1:32865, datanodeUuid=e8550b7d-3fad-4456-a753-a006c77d6029, infoPort=46613, infoSecurePort=0, ipcPort=35169, storageInfo=lv=-57;cid=testClusterID;nsid=351662858;c=1689531357576), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 18:15:58,036 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x76c0c89d43dfc19: Processing first storage report for DS-0251ce65-ef2e-4aeb-8505-7db493d146af from datanode e8550b7d-3fad-4456-a753-a006c77d6029 2023-07-16 18:15:58,036 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x76c0c89d43dfc19: from storage 
DS-0251ce65-ef2e-4aeb-8505-7db493d146af node DatanodeRegistration(127.0.0.1:32865, datanodeUuid=e8550b7d-3fad-4456-a753-a006c77d6029, infoPort=46613, infoSecurePort=0, ipcPort=35169, storageInfo=lv=-57;cid=testClusterID;nsid=351662858;c=1689531357576), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 18:15:58,058 INFO [Listener at localhost/35169] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44103 2023-07-16 18:15:58,067 WARN [Listener at localhost/33191] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 18:15:58,084 WARN [Listener at localhost/33191] conf.Configuration(1701): No unit for dfs.datanode.outliers.report.interval(1800000) assuming MILLISECONDS 2023-07-16 18:15:58,086 WARN [Listener at localhost/33191] http.HttpRequestLog(97): Jetty request log can only be enabled using Log4j 2023-07-16 18:15:58,087 INFO [Listener at localhost/33191] log.Slf4jLog(67): jetty-6.1.26 2023-07-16 18:15:58,090 INFO [Listener at localhost/33191] log.Slf4jLog(67): Extract jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/local-repository/org/apache/hadoop/hadoop-hdfs/2.10.0/hadoop-hdfs-2.10.0-tests.jar!/webapps/datanode to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/java.io.tmpdir/Jetty_localhost_35165_datanode____.tx84cr/webapp 2023-07-16 18:15:58,171 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x91931a4048d9fadb: Processing first storage report for DS-e508de1e-751d-4f9c-86b1-7c85540a99e8 from datanode c8c57d3f-0034-4295-802d-2a2c3d155fc7 2023-07-16 18:15:58,171 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x91931a4048d9fadb: from storage DS-e508de1e-751d-4f9c-86b1-7c85540a99e8 node DatanodeRegistration(127.0.0.1:38727, datanodeUuid=c8c57d3f-0034-4295-802d-2a2c3d155fc7, infoPort=45947, infoSecurePort=0, ipcPort=33191, storageInfo=lv=-57;cid=testClusterID;nsid=351662858;c=1689531357576), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 18:15:58,171 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0x91931a4048d9fadb: Processing first storage report for DS-daf1fa2b-5efd-44a5-8e69-a313e15b8b4b from datanode c8c57d3f-0034-4295-802d-2a2c3d155fc7 2023-07-16 18:15:58,171 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0x91931a4048d9fadb: from storage DS-daf1fa2b-5efd-44a5-8e69-a313e15b8b4b node DatanodeRegistration(127.0.0.1:38727, datanodeUuid=c8c57d3f-0034-4295-802d-2a2c3d155fc7, infoPort=45947, infoSecurePort=0, ipcPort=33191, storageInfo=lv=-57;cid=testClusterID;nsid=351662858;c=1689531357576), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 18:15:58,189 INFO [Listener at localhost/33191] log.Slf4jLog(67): Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35165 2023-07-16 18:15:58,195 WARN [Listener at localhost/42859] common.MetricsLoggerTask(153): Metrics logging will not be async since the logger is not log4j 2023-07-16 18:15:58,287 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc9c19dacf76335e1: Processing first storage report for DS-772c744a-be2a-46db-aa0e-2db3a2bde8ed from datanode 493e577e-7922-4974-8b72-ea296b4cae66 2023-07-16 18:15:58,287 
INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc9c19dacf76335e1: from storage DS-772c744a-be2a-46db-aa0e-2db3a2bde8ed node DatanodeRegistration(127.0.0.1:41671, datanodeUuid=493e577e-7922-4974-8b72-ea296b4cae66, infoPort=38605, infoSecurePort=0, ipcPort=42859, storageInfo=lv=-57;cid=testClusterID;nsid=351662858;c=1689531357576), blocks: 0, hasStaleStorage: true, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 18:15:58,287 INFO [Block report processor] blockmanagement.BlockManager(2202): BLOCK* processReport 0xc9c19dacf76335e1: Processing first storage report for DS-7fdc5297-a457-41cc-af25-516fadf92ce0 from datanode 493e577e-7922-4974-8b72-ea296b4cae66 2023-07-16 18:15:58,287 INFO [Block report processor] blockmanagement.BlockManager(2228): BLOCK* processReport 0xc9c19dacf76335e1: from storage DS-7fdc5297-a457-41cc-af25-516fadf92ce0 node DatanodeRegistration(127.0.0.1:41671, datanodeUuid=493e577e-7922-4974-8b72-ea296b4cae66, infoPort=38605, infoSecurePort=0, ipcPort=42859, storageInfo=lv=-57;cid=testClusterID;nsid=351662858;c=1689531357576), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0 2023-07-16 18:15:58,306 DEBUG [Listener at localhost/42859] hbase.HBaseTestingUtility(649): Setting hbase.rootdir to /home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c 2023-07-16 18:15:58,308 INFO [Listener at localhost/42859] zookeeper.MiniZooKeeperCluster(258): Started connectionTimeout=30000, dir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/cluster_3cda5aa3-2ebb-c38f-dd35-e868f192707f/zookeeper_0, clientPort=54881, secureClientPort=-1, dataDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/cluster_3cda5aa3-2ebb-c38f-dd35-e868f192707f/zookeeper_0/version-2, dataDirSize=424 dataLogDir=/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/cluster_3cda5aa3-2ebb-c38f-dd35-e868f192707f/zookeeper_0/version-2, dataLogSize=424 tickTime=2000, maxClientCnxns=300, minSessionTimeout=4000, maxSessionTimeout=40000, serverId=0 2023-07-16 18:15:58,309 INFO [Listener at localhost/42859] zookeeper.MiniZooKeeperCluster(283): Started MiniZooKeeperCluster and ran 'stat' on client port=54881 2023-07-16 18:15:58,309 INFO [Listener at localhost/42859] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:58,310 INFO [Listener at localhost/42859] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:58,329 INFO [Listener at localhost/42859] util.FSUtils(471): Created version file at hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058 with version=8 2023-07-16 18:15:58,329 INFO [Listener at localhost/42859] hbase.HBaseTestingUtility(1408): The hbase.fs.tmp.dir is set to hdfs://localhost:36523/user/jenkins/test-data/727a911f-c727-6780-c121-4ca3bbd9ae92/hbase-staging 2023-07-16 18:15:58,330 DEBUG [Listener at localhost/42859] hbase.LocalHBaseCluster(134): Setting Master Port to random. 
2023-07-16 18:15:58,330 DEBUG [Listener at localhost/42859] hbase.LocalHBaseCluster(141): Setting RegionServer Port to random. 2023-07-16 18:15:58,330 DEBUG [Listener at localhost/42859] hbase.LocalHBaseCluster(151): Setting RS InfoServer Port to random. 2023-07-16 18:15:58,330 DEBUG [Listener at localhost/42859] hbase.LocalHBaseCluster(159): Setting Master InfoServer Port to random. 2023-07-16 18:15:58,331 INFO [Listener at localhost/42859] client.ConnectionUtils(127): master/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 18:15:58,331 INFO [Listener at localhost/42859] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:58,332 INFO [Listener at localhost/42859] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:58,332 INFO [Listener at localhost/42859] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 18:15:58,332 INFO [Listener at localhost/42859] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:58,332 INFO [Listener at localhost/42859] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 18:15:58,332 INFO [Listener at localhost/42859] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.MasterService, hbase.pb.RegionServerStatusService, hbase.pb.LockService, hbase.pb.HbckService, hbase.pb.ClientMetaService, hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 18:15:58,333 INFO [Listener at localhost/42859] ipc.NettyRpcServer(120): Bind to /172.31.14.131:32899 2023-07-16 18:15:58,334 INFO [Listener at localhost/42859] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:58,335 INFO [Listener at localhost/42859] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:58,335 INFO [Listener at localhost/42859] zookeeper.RecoverableZooKeeper(93): Process identifier=master:32899 connecting to ZooKeeper ensemble=127.0.0.1:54881 2023-07-16 18:15:58,347 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:328990x0, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 18:15:58,348 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): master:32899-0x1016f591f660000 connected 2023-07-16 18:15:58,364 DEBUG [Listener at localhost/42859] zookeeper.ZKUtil(164): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 18:15:58,365 DEBUG [Listener at localhost/42859] zookeeper.ZKUtil(164): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 18:15:58,365 DEBUG [Listener at localhost/42859] 
zookeeper.ZKUtil(164): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 18:15:58,367 DEBUG [Listener at localhost/42859] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=32899 2023-07-16 18:15:58,367 DEBUG [Listener at localhost/42859] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=32899 2023-07-16 18:15:58,367 DEBUG [Listener at localhost/42859] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=32899 2023-07-16 18:15:58,368 DEBUG [Listener at localhost/42859] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=32899 2023-07-16 18:15:58,370 DEBUG [Listener at localhost/42859] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=32899 2023-07-16 18:15:58,372 INFO [Listener at localhost/42859] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 18:15:58,372 INFO [Listener at localhost/42859] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 18:15:58,372 INFO [Listener at localhost/42859] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 18:15:58,373 INFO [Listener at localhost/42859] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context master 2023-07-16 18:15:58,373 INFO [Listener at localhost/42859] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 18:15:58,373 INFO [Listener at localhost/42859] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 18:15:58,373 INFO [Listener at localhost/42859] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
2023-07-16 18:15:58,374 INFO [Listener at localhost/42859] http.HttpServer(1146): Jetty bound to port 39933 2023-07-16 18:15:58,374 INFO [Listener at localhost/42859] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 18:15:58,379 INFO [Listener at localhost/42859] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:58,379 INFO [Listener at localhost/42859] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2b2fc95d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/hadoop.log.dir/,AVAILABLE} 2023-07-16 18:15:58,380 INFO [Listener at localhost/42859] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:58,380 INFO [Listener at localhost/42859] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@597d44c7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 18:15:58,502 INFO [Listener at localhost/42859] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 18:15:58,503 INFO [Listener at localhost/42859] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 18:15:58,503 INFO [Listener at localhost/42859] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 18:15:58,503 INFO [Listener at localhost/42859] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 18:15:58,504 INFO [Listener at localhost/42859] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:58,505 INFO [Listener at localhost/42859] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@4afa3b8f{master,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/java.io.tmpdir/jetty-0_0_0_0-39933-hbase-server-2_4_18-SNAPSHOT_jar-_-any-5037530391928936859/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-16 18:15:58,506 INFO [Listener at localhost/42859] server.AbstractConnector(333): Started ServerConnector@421485aa{HTTP/1.1, (http/1.1)}{0.0.0.0:39933} 2023-07-16 18:15:58,506 INFO [Listener at localhost/42859] server.Server(415): Started @43286ms 2023-07-16 18:15:58,507 INFO [Listener at localhost/42859] master.HMaster(444): hbase.rootdir=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058, hbase.cluster.distributed=false 2023-07-16 18:15:58,519 INFO [Listener at localhost/42859] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 18:15:58,520 INFO [Listener at localhost/42859] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:58,520 INFO [Listener at localhost/42859] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:58,520 
INFO [Listener at localhost/42859] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 18:15:58,520 INFO [Listener at localhost/42859] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:58,520 INFO [Listener at localhost/42859] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 18:15:58,520 INFO [Listener at localhost/42859] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 18:15:58,521 INFO [Listener at localhost/42859] ipc.NettyRpcServer(120): Bind to /172.31.14.131:43051 2023-07-16 18:15:58,521 INFO [Listener at localhost/42859] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 18:15:58,522 DEBUG [Listener at localhost/42859] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 18:15:58,522 INFO [Listener at localhost/42859] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:58,523 INFO [Listener at localhost/42859] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:58,524 INFO [Listener at localhost/42859] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:43051 connecting to ZooKeeper ensemble=127.0.0.1:54881 2023-07-16 18:15:58,527 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:430510x0, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 18:15:58,529 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:43051-0x1016f591f660001 connected 2023-07-16 18:15:58,529 DEBUG [Listener at localhost/42859] zookeeper.ZKUtil(164): regionserver:43051-0x1016f591f660001, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 18:15:58,530 DEBUG [Listener at localhost/42859] zookeeper.ZKUtil(164): regionserver:43051-0x1016f591f660001, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 18:15:58,530 DEBUG [Listener at localhost/42859] zookeeper.ZKUtil(164): regionserver:43051-0x1016f591f660001, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 18:15:58,530 DEBUG [Listener at localhost/42859] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=43051 2023-07-16 18:15:58,531 DEBUG [Listener at localhost/42859] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=43051 2023-07-16 18:15:58,533 DEBUG [Listener at localhost/42859] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=43051 2023-07-16 18:15:58,533 DEBUG [Listener at localhost/42859] ipc.RpcExecutor(311): Started handlerCount=3 with 
threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=43051 2023-07-16 18:15:58,533 DEBUG [Listener at localhost/42859] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=43051 2023-07-16 18:15:58,535 INFO [Listener at localhost/42859] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 18:15:58,535 INFO [Listener at localhost/42859] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 18:15:58,535 INFO [Listener at localhost/42859] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 18:15:58,536 INFO [Listener at localhost/42859] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 18:15:58,536 INFO [Listener at localhost/42859] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 18:15:58,536 INFO [Listener at localhost/42859] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 18:15:58,536 INFO [Listener at localhost/42859] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 18:15:58,537 INFO [Listener at localhost/42859] http.HttpServer(1146): Jetty bound to port 39345 2023-07-16 18:15:58,537 INFO [Listener at localhost/42859] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 18:15:58,539 INFO [Listener at localhost/42859] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:58,539 INFO [Listener at localhost/42859] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@2a4c265e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/hadoop.log.dir/,AVAILABLE} 2023-07-16 18:15:58,540 INFO [Listener at localhost/42859] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:58,540 INFO [Listener at localhost/42859] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@595b7975{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 18:15:58,654 INFO [Listener at localhost/42859] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 18:15:58,655 INFO [Listener at localhost/42859] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 18:15:58,655 INFO [Listener at localhost/42859] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 18:15:58,655 INFO [Listener at localhost/42859] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 18:15:58,656 INFO [Listener at localhost/42859] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:58,657 INFO 
[Listener at localhost/42859] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@736fc548{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/java.io.tmpdir/jetty-0_0_0_0-39345-hbase-server-2_4_18-SNAPSHOT_jar-_-any-7094328093552465941/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 18:15:58,658 INFO [Listener at localhost/42859] server.AbstractConnector(333): Started ServerConnector@5f6ec059{HTTP/1.1, (http/1.1)}{0.0.0.0:39345} 2023-07-16 18:15:58,659 INFO [Listener at localhost/42859] server.Server(415): Started @43438ms 2023-07-16 18:15:58,670 INFO [Listener at localhost/42859] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 18:15:58,670 INFO [Listener at localhost/42859] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:58,670 INFO [Listener at localhost/42859] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:58,670 INFO [Listener at localhost/42859] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 18:15:58,670 INFO [Listener at localhost/42859] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:58,670 INFO [Listener at localhost/42859] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 18:15:58,670 INFO [Listener at localhost/42859] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 18:15:58,671 INFO [Listener at localhost/42859] ipc.NettyRpcServer(120): Bind to /172.31.14.131:37401 2023-07-16 18:15:58,671 INFO [Listener at localhost/42859] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 18:15:58,673 DEBUG [Listener at localhost/42859] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 18:15:58,673 INFO [Listener at localhost/42859] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:58,674 INFO [Listener at localhost/42859] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:58,675 INFO [Listener at localhost/42859] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:37401 connecting to ZooKeeper ensemble=127.0.0.1:54881 2023-07-16 18:15:58,678 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:374010x0, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 
18:15:58,680 DEBUG [Listener at localhost/42859] zookeeper.ZKUtil(164): regionserver:374010x0, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 18:15:58,680 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:37401-0x1016f591f660002 connected 2023-07-16 18:15:58,680 DEBUG [Listener at localhost/42859] zookeeper.ZKUtil(164): regionserver:37401-0x1016f591f660002, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 18:15:58,681 DEBUG [Listener at localhost/42859] zookeeper.ZKUtil(164): regionserver:37401-0x1016f591f660002, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 18:15:58,681 DEBUG [Listener at localhost/42859] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=37401 2023-07-16 18:15:58,681 DEBUG [Listener at localhost/42859] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=37401 2023-07-16 18:15:58,681 DEBUG [Listener at localhost/42859] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=37401 2023-07-16 18:15:58,682 DEBUG [Listener at localhost/42859] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=37401 2023-07-16 18:15:58,683 DEBUG [Listener at localhost/42859] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=37401 2023-07-16 18:15:58,684 INFO [Listener at localhost/42859] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 18:15:58,684 INFO [Listener at localhost/42859] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 18:15:58,684 INFO [Listener at localhost/42859] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 18:15:58,685 INFO [Listener at localhost/42859] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 18:15:58,685 INFO [Listener at localhost/42859] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 18:15:58,685 INFO [Listener at localhost/42859] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 18:15:58,685 INFO [Listener at localhost/42859] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
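The entries above show each RegionServer wiring up its RPC executors, ZooKeeper watchers, and embedded info server during startup. As a hedged illustration (not this test's actual source), the following Java sketch shows how a test typically brings up a one-master, three-regionserver mini-cluster with the public HBase 2.x test utility; the class and builder names are standard HBaseTestingUtility APIs, while the surrounding test wiring is assumed.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.StartMiniClusterOption;

    public class MiniClusterSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseTestingUtility util = new HBaseTestingUtility(conf);
        StartMiniClusterOption option = StartMiniClusterOption.builder()
            .numMasters(1)          // one HMaster, as in this log
            .numRegionServers(3)    // three RegionServers, each with its own RPC and info port
            .numDataNodes(3)        // backing mini-DFS
            .build();
        util.startMiniCluster(option);   // brings up ZK, DFS, the master and the regionservers
        // ... assertions would run against util.getConnection() / util.getAdmin() here ...
        util.shutdownMiniCluster();
      }
    }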
2023-07-16 18:15:58,685 INFO [Listener at localhost/42859] http.HttpServer(1146): Jetty bound to port 40401 2023-07-16 18:15:58,686 INFO [Listener at localhost/42859] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 18:15:58,689 INFO [Listener at localhost/42859] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:58,689 INFO [Listener at localhost/42859] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@6e117f46{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/hadoop.log.dir/,AVAILABLE} 2023-07-16 18:15:58,689 INFO [Listener at localhost/42859] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:58,690 INFO [Listener at localhost/42859] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@43f5f68f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 18:15:58,801 INFO [Listener at localhost/42859] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 18:15:58,802 INFO [Listener at localhost/42859] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 18:15:58,802 INFO [Listener at localhost/42859] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 18:15:58,802 INFO [Listener at localhost/42859] session.HouseKeeper(132): node0 Scavenging every 660000ms 2023-07-16 18:15:58,803 INFO [Listener at localhost/42859] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:58,804 INFO [Listener at localhost/42859] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7de77cdd{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/java.io.tmpdir/jetty-0_0_0_0-40401-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2114052836175924687/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 18:15:58,806 INFO [Listener at localhost/42859] server.AbstractConnector(333): Started ServerConnector@701fd4b6{HTTP/1.1, (http/1.1)}{0.0.0.0:40401} 2023-07-16 18:15:58,806 INFO [Listener at localhost/42859] server.Server(415): Started @43586ms 2023-07-16 18:15:58,818 INFO [Listener at localhost/42859] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 18:15:58,818 INFO [Listener at localhost/42859] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:58,818 INFO [Listener at localhost/42859] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:58,818 INFO [Listener at localhost/42859] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 18:15:58,818 INFO 
[Listener at localhost/42859] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:15:58,818 INFO [Listener at localhost/42859] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 18:15:58,818 INFO [Listener at localhost/42859] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 18:15:58,819 INFO [Listener at localhost/42859] ipc.NettyRpcServer(120): Bind to /172.31.14.131:35551 2023-07-16 18:15:58,819 INFO [Listener at localhost/42859] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 18:15:58,821 DEBUG [Listener at localhost/42859] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 18:15:58,821 INFO [Listener at localhost/42859] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:58,822 INFO [Listener at localhost/42859] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:58,823 INFO [Listener at localhost/42859] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:35551 connecting to ZooKeeper ensemble=127.0.0.1:54881 2023-07-16 18:15:58,826 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:355510x0, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 18:15:58,827 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:35551-0x1016f591f660003 connected 2023-07-16 18:15:58,827 DEBUG [Listener at localhost/42859] zookeeper.ZKUtil(164): regionserver:35551-0x1016f591f660003, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 18:15:58,828 DEBUG [Listener at localhost/42859] zookeeper.ZKUtil(164): regionserver:35551-0x1016f591f660003, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 18:15:58,828 DEBUG [Listener at localhost/42859] zookeeper.ZKUtil(164): regionserver:35551-0x1016f591f660003, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 18:15:58,829 DEBUG [Listener at localhost/42859] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=35551 2023-07-16 18:15:58,829 DEBUG [Listener at localhost/42859] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=35551 2023-07-16 18:15:58,829 DEBUG [Listener at localhost/42859] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=35551 2023-07-16 18:15:58,830 DEBUG [Listener at localhost/42859] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=35551 2023-07-16 18:15:58,831 DEBUG [Listener at localhost/42859] ipc.RpcExecutor(311): Started handlerCount=1 with 
threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=35551 2023-07-16 18:15:58,833 INFO [Listener at localhost/42859] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 18:15:58,833 INFO [Listener at localhost/42859] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 18:15:58,833 INFO [Listener at localhost/42859] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 18:15:58,834 INFO [Listener at localhost/42859] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 18:15:58,834 INFO [Listener at localhost/42859] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 18:15:58,834 INFO [Listener at localhost/42859] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 18:15:58,834 INFO [Listener at localhost/42859] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 2023-07-16 18:15:58,835 INFO [Listener at localhost/42859] http.HttpServer(1146): Jetty bound to port 35553 2023-07-16 18:15:58,835 INFO [Listener at localhost/42859] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 18:15:58,839 INFO [Listener at localhost/42859] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:58,839 INFO [Listener at localhost/42859] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@451ae1e1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/hadoop.log.dir/,AVAILABLE} 2023-07-16 18:15:58,840 INFO [Listener at localhost/42859] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:58,840 INFO [Listener at localhost/42859] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@7d2ac34c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 18:15:58,954 INFO [Listener at localhost/42859] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 18:15:58,954 INFO [Listener at localhost/42859] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 18:15:58,955 INFO [Listener at localhost/42859] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 18:15:58,955 INFO [Listener at localhost/42859] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 18:15:58,956 INFO [Listener at localhost/42859] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:15:58,956 INFO [Listener at localhost/42859] handler.ContextHandler(921): Started 
o.a.h.t.o.e.j.w.WebAppContext@656282{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/java.io.tmpdir/jetty-0_0_0_0-35553-hbase-server-2_4_18-SNAPSHOT_jar-_-any-6483321203531094627/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 18:15:58,958 INFO [Listener at localhost/42859] server.AbstractConnector(333): Started ServerConnector@e9426d4{HTTP/1.1, (http/1.1)}{0.0.0.0:35553} 2023-07-16 18:15:58,958 INFO [Listener at localhost/42859] server.Server(415): Started @43737ms 2023-07-16 18:15:58,960 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 18:15:58,963 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.AbstractConnector(333): Started ServerConnector@3033a3f0{HTTP/1.1, (http/1.1)}{0.0.0.0:36519} 2023-07-16 18:15:58,963 INFO [master/jenkins-hbase4:0:becomeActiveMaster] server.Server(415): Started @43743ms 2023-07-16 18:15:58,963 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2168): Adding backup master ZNode /hbase/backup-masters/jenkins-hbase4.apache.org,32899,1689531358331 2023-07-16 18:15:58,965 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-16 18:15:58,966 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/backup-masters/jenkins-hbase4.apache.org,32899,1689531358331 2023-07-16 18:15:58,967 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:43051-0x1016f591f660001, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 18:15:58,967 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:37401-0x1016f591f660002, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 18:15:58,967 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 18:15:58,967 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:35551-0x1016f591f660003, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/master 2023-07-16 18:15:58,968 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:58,969 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(162): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 18:15:58,971 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
master.ActiveMasterManager(227): Deleting ZNode for /hbase/backup-masters/jenkins-hbase4.apache.org,32899,1689531358331 from backup master directory 2023-07-16 18:15:58,971 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 18:15:58,972 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/backup-masters/jenkins-hbase4.apache.org,32899,1689531358331 2023-07-16 18:15:58,972 WARN [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 18:15:58,972 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/backup-masters 2023-07-16 18:15:58,972 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ActiveMasterManager(237): Registered as active master=jenkins-hbase4.apache.org,32899,1689531358331 2023-07-16 18:15:58,986 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] util.FSUtils(620): Created cluster ID file at hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/hbase.id with ID: 551150bd-315d-4320-93d1-990373dd96e3 2023-07-16 18:15:58,997 INFO [master/jenkins-hbase4:0:becomeActiveMaster] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:15:59,000 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:59,012 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ReadOnlyZKClient(139): Connect 0x63fcf2f8 to 127.0.0.1:54881 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 18:15:59,017 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@2a22f7f5, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 18:15:59,017 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(309): Create or load local region for table 'master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 18:15:59,018 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(132): Injected flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000 2023-07-16 18:15:59,018 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 18:15:59,020 INFO [master/jenkins-hbase4:0:becomeActiveMaster] 
regionserver.HRegion(7693): Creating {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='master:store', {NAME => 'proc', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, under table dir hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/MasterData/data/master/store-tmp 2023-07-16 18:15:59,031 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:59,031 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-16 18:15:59,031 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 18:15:59,031 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 18:15:59,031 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-16 18:15:59,031 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 18:15:59,031 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
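The descriptor printed above for the master-local 'master:store' region lists the 'proc' family attributes (BLOOMFILTER => 'ROW', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false'). As a sketch only, those same attribute names map onto the public HBase 2.x descriptor builders shown below; 'master:store' itself is an internal region the master creates, so the table name here is purely a hypothetical placeholder.

    import org.apache.hadoop.hbase.KeepDeletedCells;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.regionserver.BloomType;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ProcFamilySketch {
      public static void main(String[] args) {
        ColumnFamilyDescriptor proc = ColumnFamilyDescriptorBuilder
            .newBuilder(Bytes.toBytes("proc"))
            .setBloomFilterType(BloomType.ROW)            // BLOOMFILTER => 'ROW'
            .setMaxVersions(1)                            // VERSIONS => '1'
            .setKeepDeletedCells(KeepDeletedCells.FALSE)  // KEEP_DELETED_CELLS => 'FALSE'
            .setInMemory(false)                           // IN_MEMORY => 'false'
            .setBlocksize(65536)                          // BLOCKSIZE => '65536'
            .build();
        // "example:store" is a hypothetical name used only to show the builder API.
        TableDescriptor td = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("example", "store"))
            .setColumnFamily(proc)
            .build();
        System.out.println(td);
      }
    }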
2023-07-16 18:15:59,031 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 18:15:59,031 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegion(191): WALDir=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/MasterData/WALs/jenkins-hbase4.apache.org,32899,1689531358331 2023-07-16 18:15:59,034 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C32899%2C1689531358331, suffix=, logDir=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/MasterData/WALs/jenkins-hbase4.apache.org,32899,1689531358331, archiveDir=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/MasterData/oldWALs, maxLogs=10 2023-07-16 18:15:59,049 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32865,DS-63b2dce7-ee25-4e15-8bc1-af885cb05c90,DISK] 2023-07-16 18:15:59,049 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38727,DS-e508de1e-751d-4f9c-86b1-7c85540a99e8,DISK] 2023-07-16 18:15:59,049 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41671,DS-772c744a-be2a-46db-aa0e-2db3a2bde8ed,DISK] 2023-07-16 18:15:59,051 INFO [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/MasterData/WALs/jenkins-hbase4.apache.org,32899,1689531358331/jenkins-hbase4.apache.org%2C32899%2C1689531358331.1689531359034 2023-07-16 18:15:59,052 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32865,DS-63b2dce7-ee25-4e15-8bc1-af885cb05c90,DISK], DatanodeInfoWithStorage[127.0.0.1:38727,DS-e508de1e-751d-4f9c-86b1-7c85540a99e8,DISK], DatanodeInfoWithStorage[127.0.0.1:41671,DS-772c744a-be2a-46db-aa0e-2db3a2bde8ed,DISK]] 2023-07-16 18:15:59,052 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7854): Opening region: {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:59,052 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(866): Instantiated master:store,,1.1595e783b53d99cd5eef43b6debb2682.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:59,052 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7894): checking encryption for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 18:15:59,052 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(7897): checking classloading for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 18:15:59,054 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, 
cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family proc of region 1595e783b53d99cd5eef43b6debb2682 2023-07-16 18:15:59,056 DEBUG [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc 2023-07-16 18:15:59,056 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1595e783b53d99cd5eef43b6debb2682 columnFamilyName proc 2023-07-16 18:15:59,056 INFO [StoreOpener-1595e783b53d99cd5eef43b6debb2682-1] regionserver.HStore(310): Store=1595e783b53d99cd5eef43b6debb2682/proc, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:59,057 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-16 18:15:59,057 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682 2023-07-16 18:15:59,059 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1055): writing seq id for 1595e783b53d99cd5eef43b6debb2682 2023-07-16 18:15:59,061 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:59,061 INFO [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(1072): Opened 1595e783b53d99cd5eef43b6debb2682; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10321943520, jitterRate=-0.038694098591804504}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:59,061 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] regionserver.HRegion(965): Region open journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 18:15:59,062 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.MasterRegionFlusherAndCompactor(122): Constructor flushSize=134217728, flushPerChanges=1000000, flushIntervalMs=900000, compactMin=4 2023-07-16 18:15:59,063 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(104): Starting the Region Procedure Store, number threads=5 2023-07-16 18:15:59,063 INFO 
[master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(562): Starting 5 core workers (bigger of cpus/4 or 16) with max (burst) worker count=50 2023-07-16 18:15:59,063 INFO [master/jenkins-hbase4:0:becomeActiveMaster] region.RegionProcedureStore(255): Starting Region Procedure Store lease recovery... 2023-07-16 18:15:59,063 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(582): Recovered RegionProcedureStore lease in 0 msec 2023-07-16 18:15:59,064 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(596): Loaded RegionProcedureStore in 0 msec 2023-07-16 18:15:59,064 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.RemoteProcedureDispatcher(96): Instantiated, coreThreads=3 (allowCoreThreadTimeOut=true), queueMaxSize=32, operationDelay=150 2023-07-16 18:15:59,064 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(253): hbase:meta replica znodes: [] 2023-07-16 18:15:59,065 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.RegionServerTracker(124): Starting RegionServerTracker; 0 have existing ServerCrashProcedures, 0 possibly 'live' servers, and 0 'splitting'. 2023-07-16 18:15:59,066 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/balancer 2023-07-16 18:15:59,066 INFO [master/jenkins-hbase4:0:becomeActiveMaster] normalizer.RegionNormalizerWorker(118): Normalizer rate limit set to unlimited 2023-07-16 18:15:59,067 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/normalizer 2023-07-16 18:15:59,071 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:59,072 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/split 2023-07-16 18:15:59,072 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/switch/merge 2023-07-16 18:15:59,073 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/snapshot-cleanup 2023-07-16 18:15:59,074 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:35551-0x1016f591f660003, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 18:15:59,074 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:37401-0x1016f591f660002, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 18:15:59,074 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, 
path=/hbase/running 2023-07-16 18:15:59,074 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:43051-0x1016f591f660001, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/running 2023-07-16 18:15:59,075 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:59,075 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(744): Active/primary master=jenkins-hbase4.apache.org,32899,1689531358331, sessionid=0x1016f591f660000, setting cluster-up flag (Was=false) 2023-07-16 18:15:59,081 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:59,084 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/flush-table-proc/acquired, /hbase/flush-table-proc/reached, /hbase/flush-table-proc/abort 2023-07-16 18:15:59,085 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,32899,1689531358331 2023-07-16 18:15:59,096 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:59,101 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureUtil(251): Clearing all znodes /hbase/online-snapshot/acquired, /hbase/online-snapshot/reached, /hbase/online-snapshot/abort 2023-07-16 18:15:59,102 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure.ZKProcedureCoordinator(245): Starting controller for procedure member=jenkins-hbase4.apache.org,32899,1689531358331 2023-07-16 18:15:59,103 WARN [master/jenkins-hbase4:0:becomeActiveMaster] snapshot.SnapshotManager(302): Couldn't delete working snapshot directory: hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/.hbase-snapshot/.tmp 2023-07-16 18:15:59,104 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2930): Registered master coprocessor service: service=RSGroupAdminService 2023-07-16 18:15:59,104 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] rsgroup.RSGroupInfoManagerImpl(537): Refreshing in Offline mode. 2023-07-16 18:15:59,104 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint loaded, priority=536870911. 2023-07-16 18:15:59,105 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,32899,1689531358331] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 18:15:59,105 INFO [master/jenkins-hbase4:0:becomeActiveMaster] coprocessor.CoprocessorHost(174): System coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver loaded, priority=536870912. 
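The master above reports loading RSGroupAdminEndpoint and a test observer as system coprocessors. Assuming they are wired in through the standard configuration key (the usual route for master-side coprocessors), a minimal sketch of that configuration step looks like this; the log above simply reports each one loading with its priority (536870911, 536870912).

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.coprocessor.CoprocessorHost;

    public class MasterCoprocessorConfigSketch {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // CoprocessorHost.MASTER_COPROCESSOR_CONF_KEY == "hbase.coprocessor.master.classes"
        conf.setStrings(CoprocessorHost.MASTER_COPROCESSOR_CONF_KEY,
            "org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint");
        // A test-only MasterObserver class would typically be appended to the same key
        // before the (mini-)cluster is started.
        System.out.println(conf.get(CoprocessorHost.MASTER_COPROCESSOR_CONF_KEY));
      }
    }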
2023-07-16 18:15:59,107 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=1, state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT; InitMetaProcedure table=hbase:meta 2023-07-16 18:15:59,123 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-16 18:15:59,123 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-16 18:15:59,123 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.BaseLoadBalancer(1082): slop=0.001, systemTablesOnMaster=false 2023-07-16 18:15:59,124 INFO [master/jenkins-hbase4:0:becomeActiveMaster] balancer.StochasticLoadBalancer(253): Loaded config; maxSteps=1000000, runMaxSteps=false, stepsPerRegion=800, maxRunningTime=30000, isByTable=false, CostFunctions=[RegionCountSkewCostFunction, PrimaryRegionCountSkewCostFunction, MoveCostFunction, ServerLocalityCostFunction, RackLocalityCostFunction, TableSkewCostFunction, RegionReplicaHostCostFunction, RegionReplicaRackCostFunction, ReadRequestCostFunction, WriteRequestCostFunction, MemStoreSizeCostFunction, StoreFileCostFunction] , sum of multiplier of cost functions = 0.0 etc. 2023-07-16 18:15:59,124 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_OPEN_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 18:15:59,124 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_CLOSE_REGION-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 18:15:59,124 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 18:15:59,124 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_META_SERVER_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=5, maxPoolSize=5 2023-07-16 18:15:59,124 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=M_LOG_REPLAY_OPS-master/jenkins-hbase4:0, corePoolSize=10, maxPoolSize=10 2023-07-16 18:15:59,124 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_SNAPSHOT_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,124 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_MERGE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 18:15:59,124 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] executor.ExecutorService(93): Starting executor service name=MASTER_TABLE_OPERATIONS-master/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,142 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, 
state=RUNNABLE:INIT_META_WRITE_FS_LAYOUT, locked=true; InitMetaProcedure table=hbase:meta 2023-07-16 18:15:59,142 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.procedure2.CompletedProcedureCleaner; timeout=30000, timestamp=1689531389142 2023-07-16 18:15:59,142 INFO [PEWorker-1] procedure.InitMetaProcedure(71): BOOTSTRAP: creating hbase:meta region 2023-07-16 18:15:59,142 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): log_cleaner Cleaner pool size is 1 2023-07-16 18:15:59,143 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner 2023-07-16 18:15:59,143 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner 2023-07-16 18:15:59,143 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreWALCleaner 2023-07-16 18:15:59,143 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveProcedureWALCleaner 2023-07-16 18:15:59,143 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.LogCleaner(148): Creating 1 old WALs cleaner threads 2023-07-16 18:15:59,143 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=LogsCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:59,144 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.DirScanPool(70): hfile_cleaner Cleaner pool size is 2 2023-07-16 18:15:59,145 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveMasterLocalStoreHFileCleaner 2023-07-16 18:15:59,145 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner 2023-07-16 18:15:59,146 INFO [PEWorker-1] util.FSTableDescriptors(128): Creating new hbase:meta table descriptor 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-16 18:15:59,146 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner 2023-07-16 18:15:59,147 INFO [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.CleanerChore(175): Initialize 
cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner 2023-07-16 18:15:59,158 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(242): Starting for large file=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689531359147,5,FailOnTimeoutGroup] 2023-07-16 18:15:59,165 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] cleaner.HFileCleaner(257): Starting for small files=Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689531359162,5,FailOnTimeoutGroup] 2023-07-16 18:15:59,165 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HFileCleaner, period=600000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:59,165 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1461): Reopening regions with very high storeFileRefCount is disabled. Provide threshold value > 0 for hbase.regions.recovery.store.file.ref.count to enable it. 2023-07-16 18:15:59,165 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=ReplicationBarrierCleaner, period=43200000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:59,165 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=SnapshotCleaner, period=1800000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:59,166 INFO [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(951): ClusterId : 551150bd-315d-4320-93d1-990373dd96e3 2023-07-16 18:15:59,167 DEBUG [RS:0;jenkins-hbase4:43051] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 18:15:59,168 INFO [RS:2;jenkins-hbase4:35551] regionserver.HRegionServer(951): ClusterId : 551150bd-315d-4320-93d1-990373dd96e3 2023-07-16 18:15:59,168 INFO [RS:1;jenkins-hbase4:37401] regionserver.HRegionServer(951): ClusterId : 551150bd-315d-4320-93d1-990373dd96e3 2023-07-16 18:15:59,169 DEBUG [RS:2;jenkins-hbase4:35551] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 18:15:59,170 DEBUG [RS:1;jenkins-hbase4:37401] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 18:15:59,172 DEBUG [RS:0;jenkins-hbase4:43051] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 18:15:59,172 DEBUG [RS:0;jenkins-hbase4:43051] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 18:15:59,174 DEBUG [RS:2;jenkins-hbase4:35551] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 18:15:59,174 DEBUG [RS:2;jenkins-hbase4:35551] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 18:15:59,175 DEBUG [RS:1;jenkins-hbase4:37401] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 18:15:59,175 DEBUG [RS:1;jenkins-hbase4:37401] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 18:15:59,177 DEBUG [RS:0;jenkins-hbase4:43051] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 18:15:59,177 DEBUG [RS:2;jenkins-hbase4:35551] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 18:15:59,177 DEBUG [RS:1;jenkins-hbase4:37401] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 18:15:59,183 DEBUG 
[RS:0;jenkins-hbase4:43051] zookeeper.ReadOnlyZKClient(139): Connect 0x6e985cec to 127.0.0.1:54881 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 18:15:59,190 DEBUG [RS:2;jenkins-hbase4:35551] zookeeper.ReadOnlyZKClient(139): Connect 0x399d1e8b to 127.0.0.1:54881 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 18:15:59,191 DEBUG [RS:1;jenkins-hbase4:37401] zookeeper.ReadOnlyZKClient(139): Connect 0x7c93a891 to 127.0.0.1:54881 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 18:15:59,211 DEBUG [RS:0;jenkins-hbase4:43051] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@605fbf4f, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 18:15:59,212 DEBUG [RS:0;jenkins-hbase4:43051] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6460379c, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 18:15:59,213 DEBUG [PEWorker-1] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-16 18:15:59,214 INFO [PEWorker-1] util.FSTableDescriptors(135): Updated hbase:meta table descriptor to hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/.tabledesc/.tableinfo.0000000001 2023-07-16 18:15:59,214 INFO [PEWorker-1] regionserver.HRegion(7675): creating {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}}, {NAME => 'info', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, {NAME => 'rep_barrier', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '2147483647', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, {NAME => 'table', BLOOMFILTER => 'NONE', IN_MEMORY => 'true', VERSIONS => '3', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058 2023-07-16 18:15:59,215 DEBUG [RS:2;jenkins-hbase4:35551] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@7c9f97dc, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 18:15:59,215 DEBUG [RS:2;jenkins-hbase4:35551] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@64cdff19, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, 
connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 18:15:59,219 DEBUG [RS:1;jenkins-hbase4:37401] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@556e956, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 18:15:59,220 DEBUG [RS:1;jenkins-hbase4:37401] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@4d25b0a4, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 18:15:59,226 DEBUG [RS:0;jenkins-hbase4:43051] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:0;jenkins-hbase4:43051 2023-07-16 18:15:59,226 INFO [RS:0;jenkins-hbase4:43051] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 18:15:59,226 INFO [RS:0;jenkins-hbase4:43051] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 18:15:59,226 DEBUG [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 18:15:59,227 INFO [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,32899,1689531358331 with isa=jenkins-hbase4.apache.org/172.31.14.131:43051, startcode=1689531358519 2023-07-16 18:15:59,227 DEBUG [RS:2;jenkins-hbase4:35551] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:2;jenkins-hbase4:35551 2023-07-16 18:15:59,227 INFO [RS:2;jenkins-hbase4:35551] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 18:15:59,227 INFO [RS:2;jenkins-hbase4:35551] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 18:15:59,227 DEBUG [RS:0;jenkins-hbase4:43051] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 18:15:59,227 DEBUG [RS:2;jenkins-hbase4:35551] regionserver.HRegionServer(1022): About to register with Master. 
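The PEWorker-1 entry above records the hbase:meta table descriptor being written: three column families (info, rep_barrier, table) plus the MultiRowMutationEndpoint coprocessor. Below is a minimal sketch of building an equivalent descriptor with the public 2.x builder API, with the family settings copied from the logged descriptor; it is illustrative only, not how the master constructs meta internally.

import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaDescriptorSketch {
  public static TableDescriptor build() throws IOException {
    // 'info' family as logged: no bloom filter, in-memory, 3 versions, 8 KB blocks
    ColumnFamilyDescriptor info = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
        .setBloomFilterType(BloomType.NONE).setInMemory(true).setMaxVersions(3).setBlocksize(8192).build();
    // 'rep_barrier' keeps every version (2147483647 in the log), 64 KB blocks
    ColumnFamilyDescriptor repBarrier = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("rep_barrier"))
        .setBloomFilterType(BloomType.NONE).setInMemory(true).setMaxVersions(Integer.MAX_VALUE).setBlocksize(65536).build();
    // 'table' family mirrors 'info'
    ColumnFamilyDescriptor table = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("table"))
        .setBloomFilterType(BloomType.NONE).setInMemory(true).setMaxVersions(3).setBlocksize(8192).build();
    return TableDescriptorBuilder.newBuilder(TableName.META_TABLE_NAME)
        .setColumnFamily(info)
        .setColumnFamily(repBarrier)
        .setColumnFamily(table)
        // same coprocessor string as the TABLE_ATTRIBUTES in the logged descriptor
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        .build();
  }
}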
2023-07-16 18:15:59,228 INFO [RS:2;jenkins-hbase4:35551] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,32899,1689531358331 with isa=jenkins-hbase4.apache.org/172.31.14.131:35551, startcode=1689531358817 2023-07-16 18:15:59,228 DEBUG [RS:2;jenkins-hbase4:35551] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 18:15:59,229 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:43439, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.7 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 18:15:59,229 DEBUG [RS:1;jenkins-hbase4:37401] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:1;jenkins-hbase4:37401 2023-07-16 18:15:59,229 INFO [RS:1;jenkins-hbase4:37401] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 18:15:59,229 INFO [RS:1;jenkins-hbase4:37401] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 18:15:59,229 DEBUG [RS:1;jenkins-hbase4:37401] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 18:15:59,230 INFO [RS:1;jenkins-hbase4:37401] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,32899,1689531358331 with isa=jenkins-hbase4.apache.org/172.31.14.131:37401, startcode=1689531358669 2023-07-16 18:15:59,230 DEBUG [RS:1;jenkins-hbase4:37401] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 18:15:59,231 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:49553, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.9 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 18:15:59,231 INFO [RS-EventLoopGroup-12-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:45369, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.8 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 18:15:59,240 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32899] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,43051,1689531358519 2023-07-16 18:15:59,240 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,32899,1689531358331] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 18:15:59,241 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,32899,1689531358331] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 1 2023-07-16 18:15:59,241 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32899] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,35551,1689531358817 2023-07-16 18:15:59,241 DEBUG [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058 2023-07-16 18:15:59,241 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,32899,1689531358331] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
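The ServerManager entries above register each region server under a composite name of the form host,port,startcode (for example jenkins-hbase4.apache.org,43051,1689531358519). A tiny sketch of parsing that form with the client-side ServerName type; the fields printed are the ones encoded in the logged name, and the trailing number is the server's start code.

import org.apache.hadoop.hbase.ServerName;

public class ServerNameSketch {
  public static void main(String[] args) {
    // Same "host,port,startcode" shape that ServerManager logs when registering
    ServerName sn = ServerName.valueOf("jenkins-hbase4.apache.org,43051,1689531358519");
    System.out.println(sn.getHostname()); // jenkins-hbase4.apache.org
    System.out.println(sn.getPort());     // 43051
  }
}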
2023-07-16 18:15:59,241 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32899] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,37401,1689531358669 2023-07-16 18:15:59,241 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,32899,1689531358331] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 2 2023-07-16 18:15:59,241 DEBUG [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39679 2023-07-16 18:15:59,242 DEBUG [RS:2;jenkins-hbase4:35551] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058 2023-07-16 18:15:59,242 DEBUG [RS:2;jenkins-hbase4:35551] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39679 2023-07-16 18:15:59,241 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,32899,1689531358331] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 2023-07-16 18:15:59,242 DEBUG [RS:2;jenkins-hbase4:35551] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39933 2023-07-16 18:15:59,242 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,32899,1689531358331] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 3 2023-07-16 18:15:59,242 DEBUG [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39933 2023-07-16 18:15:59,242 DEBUG [RS:1;jenkins-hbase4:37401] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058 2023-07-16 18:15:59,242 DEBUG [RS:1;jenkins-hbase4:37401] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39679 2023-07-16 18:15:59,242 DEBUG [RS:1;jenkins-hbase4:37401] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39933 2023-07-16 18:15:59,248 DEBUG [PEWorker-1] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:59,249 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:15:59,252 DEBUG [RS:0;jenkins-hbase4:43051] zookeeper.ZKUtil(162): regionserver:43051-0x1016f591f660001, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43051,1689531358519 2023-07-16 18:15:59,252 DEBUG [RS:2;jenkins-hbase4:35551] zookeeper.ZKUtil(162): regionserver:35551-0x1016f591f660003, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35551,1689531358817 2023-07-16 18:15:59,252 WARN [RS:0;jenkins-hbase4:43051] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
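As each server reports for duty, the RSGroupInfoManagerImpl listener above folds it into the default RSGroup ("Updated with servers: 3"). A small sketch, assuming the hbase-rsgroup client classes used by this module (RSGroupAdminClient, RSGroupInfo) are on the classpath and a connection to this cluster can be made, of reading that group back from a test or client.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;
import org.apache.hadoop.hbase.rsgroup.RSGroupInfo;

public class DefaultGroupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
      // Once all three servers have reported for duty, the default group should
      // list them, matching "Updated with servers: 3" above.
      RSGroupInfo defaultGroup = rsGroupAdmin.getRSGroupInfo(RSGroupInfo.DEFAULT_GROUP);
      defaultGroup.getServers().forEach(addr -> System.out.println("default member: " + addr));
    }
  }
}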
2023-07-16 18:15:59,252 WARN [RS:2;jenkins-hbase4:35551] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 18:15:59,252 INFO [RS:0;jenkins-hbase4:43051] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 18:15:59,252 DEBUG [RS:1;jenkins-hbase4:37401] zookeeper.ZKUtil(162): regionserver:37401-0x1016f591f660002, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37401,1689531358669 2023-07-16 18:15:59,252 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,43051,1689531358519] 2023-07-16 18:15:59,252 WARN [RS:1;jenkins-hbase4:37401] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 2023-07-16 18:15:59,252 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 18:15:59,252 INFO [RS:1;jenkins-hbase4:37401] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 18:15:59,252 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,35551,1689531358817] 2023-07-16 18:15:59,253 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,37401,1689531358669] 2023-07-16 18:15:59,253 DEBUG [RS:1;jenkins-hbase4:37401] regionserver.HRegionServer(1948): logDir=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/WALs/jenkins-hbase4.apache.org,37401,1689531358669 2023-07-16 18:15:59,252 DEBUG [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(1948): logDir=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/WALs/jenkins-hbase4.apache.org,43051,1689531358519 2023-07-16 18:15:59,252 INFO [RS:2;jenkins-hbase4:35551] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 18:15:59,253 DEBUG [RS:2;jenkins-hbase4:35551] regionserver.HRegionServer(1948): logDir=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/WALs/jenkins-hbase4.apache.org,35551,1689531358817 2023-07-16 18:15:59,259 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740/info 2023-07-16 18:15:59,261 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory 
org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 18:15:59,263 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:59,263 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 18:15:59,265 DEBUG [RS:0;jenkins-hbase4:43051] zookeeper.ZKUtil(162): regionserver:43051-0x1016f591f660001, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35551,1689531358817 2023-07-16 18:15:59,265 DEBUG [RS:1;jenkins-hbase4:37401] zookeeper.ZKUtil(162): regionserver:37401-0x1016f591f660002, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35551,1689531358817 2023-07-16 18:15:59,265 DEBUG [RS:2;jenkins-hbase4:35551] zookeeper.ZKUtil(162): regionserver:35551-0x1016f591f660003, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35551,1689531358817 2023-07-16 18:15:59,265 DEBUG [RS:0;jenkins-hbase4:43051] zookeeper.ZKUtil(162): regionserver:43051-0x1016f591f660001, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43051,1689531358519 2023-07-16 18:15:59,265 DEBUG [RS:1;jenkins-hbase4:37401] zookeeper.ZKUtil(162): regionserver:37401-0x1016f591f660002, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43051,1689531358519 2023-07-16 18:15:59,266 DEBUG [RS:2;jenkins-hbase4:35551] zookeeper.ZKUtil(162): regionserver:35551-0x1016f591f660003, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43051,1689531358519 2023-07-16 18:15:59,266 DEBUG [RS:0;jenkins-hbase4:43051] zookeeper.ZKUtil(162): regionserver:43051-0x1016f591f660001, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37401,1689531358669 2023-07-16 18:15:59,266 DEBUG [RS:1;jenkins-hbase4:37401] zookeeper.ZKUtil(162): regionserver:37401-0x1016f591f660002, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37401,1689531358669 2023-07-16 18:15:59,266 DEBUG [RS:2;jenkins-hbase4:35551] zookeeper.ZKUtil(162): regionserver:35551-0x1016f591f660003, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37401,1689531358669 2023-07-16 18:15:59,267 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740/rep_barrier 2023-07-16 18:15:59,267 DEBUG [RS:0;jenkins-hbase4:43051] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 18:15:59,267 DEBUG [RS:1;jenkins-hbase4:37401] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 18:15:59,267 INFO [RS:0;jenkins-hbase4:43051] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics 
every 5000 milliseconds 2023-07-16 18:15:59,267 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 18:15:59,267 INFO [RS:1;jenkins-hbase4:37401] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 18:15:59,268 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:59,268 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 18:15:59,269 DEBUG [RS:2;jenkins-hbase4:35551] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 18:15:59,269 INFO [RS:2;jenkins-hbase4:35551] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 18:15:59,269 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740/table 2023-07-16 18:15:59,270 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 18:15:59,270 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:59,271 INFO [RS:0;jenkins-hbase4:43051] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 18:15:59,275 INFO [RS:1;jenkins-hbase4:37401] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 18:15:59,275 INFO [RS:2;jenkins-hbase4:35551] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 
18:15:59,275 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740 2023-07-16 18:15:59,276 DEBUG [PEWorker-1] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740 2023-07-16 18:15:59,278 DEBUG [PEWorker-1] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-16 18:15:59,279 INFO [RS:1;jenkins-hbase4:37401] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 18:15:59,279 INFO [RS:0;jenkins-hbase4:43051] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 18:15:59,279 INFO [RS:2;jenkins-hbase4:35551] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 18:15:59,279 INFO [RS:2;jenkins-hbase4:35551] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:59,279 INFO [RS:1;jenkins-hbase4:37401] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:59,279 INFO [RS:0;jenkins-hbase4:43051] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:59,280 DEBUG [PEWorker-1] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 18:15:59,282 INFO [RS:2;jenkins-hbase4:35551] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 18:15:59,288 INFO [RS:1;jenkins-hbase4:37401] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 18:15:59,288 INFO [RS:2;jenkins-hbase4:35551] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
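The PressureAwareCompactionThroughputController and FlushLargeStoresPolicy entries above are driven by configuration. A sketch of the corresponding settings follows; hbase.hregion.percolumnfamilyflush.size.lower.bound appears verbatim in the log, the two throughput keys are assumed to be the standard hbase.hstore.compaction.throughput.higher.bound / lower.bound names, and the values are purely illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionFlushTuningSketch {
  public static Configuration tuned() {
    Configuration conf = HBaseConfiguration.create();
    // Assumed key names for the throughput bounds reported above
    // (higher bound 100.00 MB/second, lower bound 50.00 MB/second).
    conf.setLong("hbase.hstore.compaction.throughput.higher.bound", 100L * 1024 * 1024);
    conf.setLong("hbase.hstore.compaction.throughput.lower.bound", 50L * 1024 * 1024);
    // Key quoted in the FlushLargeStoresPolicy message: when unset, the policy
    // falls back to memstore flush size divided by the number of families (42.7 M here).
    conf.setLong("hbase.hregion.percolumnfamilyflush.size.lower.bound", 16L * 1024 * 1024);
    return conf;
  }
}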
2023-07-16 18:15:59,288 INFO [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 18:15:59,288 DEBUG [RS:2;jenkins-hbase4:35551] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,289 DEBUG [RS:2;jenkins-hbase4:35551] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,290 DEBUG [PEWorker-1] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:59,290 DEBUG [RS:2;jenkins-hbase4:35551] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,290 DEBUG [RS:2;jenkins-hbase4:35551] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,290 DEBUG [RS:2;jenkins-hbase4:35551] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,290 INFO [RS:1;jenkins-hbase4:37401] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:59,290 DEBUG [RS:2;jenkins-hbase4:35551] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 18:15:59,290 DEBUG [RS:1;jenkins-hbase4:37401] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,290 DEBUG [RS:2;jenkins-hbase4:35551] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,291 DEBUG [RS:1;jenkins-hbase4:37401] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,291 DEBUG [RS:2;jenkins-hbase4:35551] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,291 DEBUG [RS:1;jenkins-hbase4:37401] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,291 DEBUG [RS:1;jenkins-hbase4:37401] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,291 DEBUG [RS:1;jenkins-hbase4:37401] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,292 DEBUG [RS:1;jenkins-hbase4:37401] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 18:15:59,292 DEBUG [RS:1;jenkins-hbase4:37401] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 
2023-07-16 18:15:59,292 DEBUG [RS:1;jenkins-hbase4:37401] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,292 DEBUG [RS:1;jenkins-hbase4:37401] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,292 DEBUG [RS:1;jenkins-hbase4:37401] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,291 DEBUG [RS:2;jenkins-hbase4:35551] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,291 INFO [PEWorker-1] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10790933120, jitterRate=0.004983961582183838}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 18:15:59,292 DEBUG [RS:2;jenkins-hbase4:35551] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,292 DEBUG [PEWorker-1] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 18:15:59,292 INFO [RS:0;jenkins-hbase4:43051] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:59,292 DEBUG [PEWorker-1] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 18:15:59,292 INFO [PEWorker-1] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 18:15:59,292 DEBUG [PEWorker-1] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 18:15:59,292 DEBUG [PEWorker-1] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 18:15:59,292 DEBUG [PEWorker-1] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 18:15:59,299 DEBUG [RS:0;jenkins-hbase4:43051] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,299 DEBUG [RS:0;jenkins-hbase4:43051] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,299 DEBUG [RS:0;jenkins-hbase4:43051] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,299 INFO [RS:1;jenkins-hbase4:37401] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:59,300 DEBUG [RS:0;jenkins-hbase4:43051] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,300 INFO [RS:1;jenkins-hbase4:37401] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 
2023-07-16 18:15:59,300 DEBUG [RS:0;jenkins-hbase4:43051] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,300 INFO [RS:1;jenkins-hbase4:37401] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:59,300 DEBUG [RS:0;jenkins-hbase4:43051] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 18:15:59,300 DEBUG [RS:0;jenkins-hbase4:43051] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,300 DEBUG [RS:0;jenkins-hbase4:43051] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,300 DEBUG [RS:0;jenkins-hbase4:43051] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,300 DEBUG [RS:0;jenkins-hbase4:43051] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:15:59,303 INFO [PEWorker-1] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 18:15:59,303 DEBUG [PEWorker-1] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-16 18:15:59,306 DEBUG [PEWorker-1] procedure.InitMetaProcedure(92): Execute pid=1, state=RUNNABLE:INIT_META_ASSIGN_META, locked=true; InitMetaProcedure table=hbase:meta 2023-07-16 18:15:59,306 INFO [PEWorker-1] procedure.InitMetaProcedure(103): Going to assign meta 2023-07-16 18:15:59,306 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN}] 2023-07-16 18:15:59,314 INFO [RS:2;jenkins-hbase4:35551] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:59,315 INFO [RS:2;jenkins-hbase4:35551] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:59,315 INFO [RS:2;jenkins-hbase4:35551] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:59,315 INFO [RS:0;jenkins-hbase4:43051] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:59,315 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN 2023-07-16 18:15:59,315 INFO [RS:0;jenkins-hbase4:43051] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:59,315 INFO [RS:0;jenkins-hbase4:43051] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 
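The subprocedure initialized above (TransitRegionStateProcedure table=hbase:meta, ASSIGN) goes on to assign hbase:meta and publish its location to ZooKeeper; the OPENING and OPEN transitions appear further down. Once that completes, a client can resolve the location through the ordinary RegionLocator path, as in this minimal sketch (the quorum address is copied from the log, and the lookup is illustrative rather than part of the test).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class MetaLocationSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    conf.set(HConstants.ZOOKEEPER_QUORUM, "127.0.0.1:54881"); // quorum used by this test run
    try (Connection conn = ConnectionFactory.createConnection(conf);
         RegionLocator locator = conn.getRegionLocator(TableName.META_TABLE_NAME)) {
      HRegionLocation loc = locator.getRegionLocation(HConstants.EMPTY_START_ROW);
      // Expected to resolve to whichever server the assignment picked (port 37401 below).
      System.out.println("hbase:meta is on " + loc.getServerName());
    }
  }
}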
2023-07-16 18:15:59,321 INFO [RS:1;jenkins-hbase4:37401] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 18:15:59,322 INFO [RS:1;jenkins-hbase4:37401] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,37401,1689531358669-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:59,322 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=2, ppid=1, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN; state=OFFLINE, location=null; forceNewPlan=false, retain=false 2023-07-16 18:15:59,331 INFO [RS:2;jenkins-hbase4:35551] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 18:15:59,331 INFO [RS:2;jenkins-hbase4:35551] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,35551,1689531358817-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:59,358 INFO [RS:1;jenkins-hbase4:37401] regionserver.Replication(203): jenkins-hbase4.apache.org,37401,1689531358669 started 2023-07-16 18:15:59,359 INFO [RS:1;jenkins-hbase4:37401] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,37401,1689531358669, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:37401, sessionid=0x1016f591f660002 2023-07-16 18:15:59,359 DEBUG [RS:1;jenkins-hbase4:37401] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 18:15:59,359 DEBUG [RS:1;jenkins-hbase4:37401] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,37401,1689531358669 2023-07-16 18:15:59,359 DEBUG [RS:1;jenkins-hbase4:37401] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37401,1689531358669' 2023-07-16 18:15:59,359 DEBUG [RS:1;jenkins-hbase4:37401] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 18:15:59,359 DEBUG [RS:1;jenkins-hbase4:37401] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 18:15:59,360 DEBUG [RS:1;jenkins-hbase4:37401] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 18:15:59,360 DEBUG [RS:1;jenkins-hbase4:37401] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 18:15:59,360 DEBUG [RS:1;jenkins-hbase4:37401] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,37401,1689531358669 2023-07-16 18:15:59,360 DEBUG [RS:1;jenkins-hbase4:37401] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,37401,1689531358669' 2023-07-16 18:15:59,360 DEBUG [RS:1;jenkins-hbase4:37401] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 18:15:59,360 DEBUG [RS:1;jenkins-hbase4:37401] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 18:15:59,360 DEBUG [RS:1;jenkins-hbase4:37401] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 18:15:59,360 INFO [RS:1;jenkins-hbase4:37401] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 18:15:59,360 INFO [RS:1;jenkins-hbase4:37401] quotas.RegionServerSpaceQuotaManager(80): Quota 
support disabled, not starting space quota manager. 2023-07-16 18:15:59,365 INFO [RS:0;jenkins-hbase4:43051] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 18:15:59,365 INFO [RS:0;jenkins-hbase4:43051] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,43051,1689531358519-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:59,375 INFO [RS:2;jenkins-hbase4:35551] regionserver.Replication(203): jenkins-hbase4.apache.org,35551,1689531358817 started 2023-07-16 18:15:59,375 INFO [RS:2;jenkins-hbase4:35551] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,35551,1689531358817, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:35551, sessionid=0x1016f591f660003 2023-07-16 18:15:59,375 DEBUG [RS:2;jenkins-hbase4:35551] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 18:15:59,376 DEBUG [RS:2;jenkins-hbase4:35551] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,35551,1689531358817 2023-07-16 18:15:59,376 DEBUG [RS:2;jenkins-hbase4:35551] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35551,1689531358817' 2023-07-16 18:15:59,376 DEBUG [RS:2;jenkins-hbase4:35551] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 18:15:59,376 DEBUG [RS:2;jenkins-hbase4:35551] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 18:15:59,377 DEBUG [RS:2;jenkins-hbase4:35551] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 18:15:59,377 DEBUG [RS:2;jenkins-hbase4:35551] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 18:15:59,377 DEBUG [RS:2;jenkins-hbase4:35551] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,35551,1689531358817 2023-07-16 18:15:59,377 DEBUG [RS:2;jenkins-hbase4:35551] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,35551,1689531358817' 2023-07-16 18:15:59,377 DEBUG [RS:2;jenkins-hbase4:35551] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 18:15:59,377 DEBUG [RS:2;jenkins-hbase4:35551] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 18:15:59,377 DEBUG [RS:2;jenkins-hbase4:35551] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 18:15:59,377 INFO [RS:2;jenkins-hbase4:35551] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 18:15:59,377 INFO [RS:2;jenkins-hbase4:35551] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
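The RPC and space quota managers report themselves disabled above. Quota support in 2.x is gated on the hbase.quota.enabled switch; a sketch of flipping it for a test, where the boolean is the only change from defaults.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class QuotaSwitchSketch {
  public static Configuration withQuotas() {
    Configuration conf = HBaseConfiguration.create();
    // Left at its default of false, region servers log "Quota support disabled"
    // as above and skip starting the space quota manager.
    conf.setBoolean("hbase.quota.enabled", true);
    return conf;
  }
}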
2023-07-16 18:15:59,380 INFO [RS:0;jenkins-hbase4:43051] regionserver.Replication(203): jenkins-hbase4.apache.org,43051,1689531358519 started 2023-07-16 18:15:59,380 INFO [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,43051,1689531358519, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:43051, sessionid=0x1016f591f660001 2023-07-16 18:15:59,380 DEBUG [RS:0;jenkins-hbase4:43051] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 18:15:59,380 DEBUG [RS:0;jenkins-hbase4:43051] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,43051,1689531358519 2023-07-16 18:15:59,380 DEBUG [RS:0;jenkins-hbase4:43051] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43051,1689531358519' 2023-07-16 18:15:59,380 DEBUG [RS:0;jenkins-hbase4:43051] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 18:15:59,381 DEBUG [RS:0;jenkins-hbase4:43051] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 18:15:59,381 DEBUG [RS:0;jenkins-hbase4:43051] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 18:15:59,382 DEBUG [RS:0;jenkins-hbase4:43051] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 18:15:59,382 DEBUG [RS:0;jenkins-hbase4:43051] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,43051,1689531358519 2023-07-16 18:15:59,382 DEBUG [RS:0;jenkins-hbase4:43051] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,43051,1689531358519' 2023-07-16 18:15:59,382 DEBUG [RS:0;jenkins-hbase4:43051] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 18:15:59,382 DEBUG [RS:0;jenkins-hbase4:43051] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 18:15:59,383 DEBUG [RS:0;jenkins-hbase4:43051] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 18:15:59,383 INFO [RS:0;jenkins-hbase4:43051] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 18:15:59,383 INFO [RS:0;jenkins-hbase4:43051] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
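At this point all three region servers (ports 43051, 37401 and 35551) are serving and registered. A short sketch of confirming that from the client side via Admin; the println stands in for whatever assertion a test would actually make.

import org.apache.hadoop.hbase.ClusterMetrics;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class LiveServersSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      ClusterMetrics metrics = admin.getClusterMetrics();
      // Should report the three servers started above.
      System.out.println("live servers: " + metrics.getLiveServerMetrics().keySet());
    }
  }
}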
2023-07-16 18:15:59,463 INFO [RS:1;jenkins-hbase4:37401] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37401%2C1689531358669, suffix=, logDir=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/WALs/jenkins-hbase4.apache.org,37401,1689531358669, archiveDir=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/oldWALs, maxLogs=32 2023-07-16 18:15:59,479 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41671,DS-772c744a-be2a-46db-aa0e-2db3a2bde8ed,DISK] 2023-07-16 18:15:59,480 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38727,DS-e508de1e-751d-4f9c-86b1-7c85540a99e8,DISK] 2023-07-16 18:15:59,480 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32865,DS-63b2dce7-ee25-4e15-8bc1-af885cb05c90,DISK] 2023-07-16 18:15:59,482 INFO [RS:2;jenkins-hbase4:35551] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C35551%2C1689531358817, suffix=, logDir=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/WALs/jenkins-hbase4.apache.org,35551,1689531358817, archiveDir=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/oldWALs, maxLogs=32 2023-07-16 18:15:59,484 INFO [RS:0;jenkins-hbase4:43051] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C43051%2C1689531358519, suffix=, logDir=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/WALs/jenkins-hbase4.apache.org,43051,1689531358519, archiveDir=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/oldWALs, maxLogs=32 2023-07-16 18:15:59,492 INFO [RS:1;jenkins-hbase4:37401] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/WALs/jenkins-hbase4.apache.org,37401,1689531358669/jenkins-hbase4.apache.org%2C37401%2C1689531358669.1689531359463 2023-07-16 18:15:59,492 DEBUG [RS:1;jenkins-hbase4:37401] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:41671,DS-772c744a-be2a-46db-aa0e-2db3a2bde8ed,DISK], DatanodeInfoWithStorage[127.0.0.1:32865,DS-63b2dce7-ee25-4e15-8bc1-af885cb05c90,DISK], DatanodeInfoWithStorage[127.0.0.1:38727,DS-e508de1e-751d-4f9c-86b1-7c85540a99e8,DISK]] 2023-07-16 18:15:59,499 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32865,DS-63b2dce7-ee25-4e15-8bc1-af885cb05c90,DISK] 2023-07-16 18:15:59,499 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41671,DS-772c744a-be2a-46db-aa0e-2db3a2bde8ed,DISK] 2023-07-16 18:15:59,499 DEBUG 
[RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38727,DS-e508de1e-751d-4f9c-86b1-7c85540a99e8,DISK] 2023-07-16 18:15:59,504 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38727,DS-e508de1e-751d-4f9c-86b1-7c85540a99e8,DISK] 2023-07-16 18:15:59,504 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32865,DS-63b2dce7-ee25-4e15-8bc1-af885cb05c90,DISK] 2023-07-16 18:15:59,505 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41671,DS-772c744a-be2a-46db-aa0e-2db3a2bde8ed,DISK] 2023-07-16 18:15:59,505 DEBUG [jenkins-hbase4:32899] assignment.AssignmentManager(2176): Processing assignQueue; systemServersCount=3, allServersCount=3 2023-07-16 18:15:59,505 DEBUG [jenkins-hbase4:32899] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 18:15:59,505 DEBUG [jenkins-hbase4:32899] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 18:15:59,505 DEBUG [jenkins-hbase4:32899] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 18:15:59,505 DEBUG [jenkins-hbase4:32899] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 18:15:59,505 DEBUG [jenkins-hbase4:32899] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 18:15:59,507 INFO [PEWorker-3] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37401,1689531358669, state=OPENING 2023-07-16 18:15:59,508 INFO [RS:0;jenkins-hbase4:43051] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/WALs/jenkins-hbase4.apache.org,43051,1689531358519/jenkins-hbase4.apache.org%2C43051%2C1689531358519.1689531359485 2023-07-16 18:15:59,508 DEBUG [RS:0;jenkins-hbase4:43051] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:32865,DS-63b2dce7-ee25-4e15-8bc1-af885cb05c90,DISK], DatanodeInfoWithStorage[127.0.0.1:38727,DS-e508de1e-751d-4f9c-86b1-7c85540a99e8,DISK], DatanodeInfoWithStorage[127.0.0.1:41671,DS-772c744a-be2a-46db-aa0e-2db3a2bde8ed,DISK]] 2023-07-16 18:15:59,508 INFO [RS:2;jenkins-hbase4:35551] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/WALs/jenkins-hbase4.apache.org,35551,1689531358817/jenkins-hbase4.apache.org%2C35551%2C1689531358817.1689531359482 2023-07-16 18:15:59,509 DEBUG [PEWorker-3] zookeeper.MetaTableLocator(240): hbase:meta region location doesn't exist, create it 2023-07-16 18:15:59,511 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:15:59,511 DEBUG [zk-event-processor-pool-0] 
master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 18:15:59,511 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=3, ppid=2, state=RUNNABLE; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37401,1689531358669}] 2023-07-16 18:15:59,511 DEBUG [RS:2;jenkins-hbase4:35551] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38727,DS-e508de1e-751d-4f9c-86b1-7c85540a99e8,DISK], DatanodeInfoWithStorage[127.0.0.1:32865,DS-63b2dce7-ee25-4e15-8bc1-af885cb05c90,DISK], DatanodeInfoWithStorage[127.0.0.1:41671,DS-772c744a-be2a-46db-aa0e-2db3a2bde8ed,DISK]] 2023-07-16 18:15:59,666 DEBUG [RSProcedureDispatcher-pool-0] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,37401,1689531358669 2023-07-16 18:15:59,666 DEBUG [RSProcedureDispatcher-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 18:15:59,668 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33050, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 18:15:59,672 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:meta,,1.1588230740 2023-07-16 18:15:59,672 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 18:15:59,673 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C37401%2C1689531358669.meta, suffix=.meta, logDir=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/WALs/jenkins-hbase4.apache.org,37401,1689531358669, archiveDir=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/oldWALs, maxLogs=32 2023-07-16 18:15:59,690 DEBUG [RS-EventLoopGroup-15-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32865,DS-63b2dce7-ee25-4e15-8bc1-af885cb05c90,DISK] 2023-07-16 18:15:59,690 DEBUG [RS-EventLoopGroup-15-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38727,DS-e508de1e-751d-4f9c-86b1-7c85540a99e8,DISK] 2023-07-16 18:15:59,691 DEBUG [RS-EventLoopGroup-15-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41671,DS-772c744a-be2a-46db-aa0e-2db3a2bde8ed,DISK] 2023-07-16 18:15:59,693 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(806): New WAL /user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/WALs/jenkins-hbase4.apache.org,37401,1689531358669/jenkins-hbase4.apache.org%2C37401%2C1689531358669.meta.1689531359673.meta 2023-07-16 18:15:59,693 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38727,DS-e508de1e-751d-4f9c-86b1-7c85540a99e8,DISK], DatanodeInfoWithStorage[127.0.0.1:32865,DS-63b2dce7-ee25-4e15-8bc1-af885cb05c90,DISK], 
DatanodeInfoWithStorage[127.0.0.1:41671,DS-772c744a-be2a-46db-aa0e-2db3a2bde8ed,DISK]] 2023-07-16 18:15:59,694 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:59,694 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 18:15:59,694 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:meta,,1 service=MultiRowMutationService 2023-07-16 18:15:59,694 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully. 2023-07-16 18:15:59,694 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table meta 1588230740 2023-07-16 18:15:59,694 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:meta,,1.1588230740; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:59,694 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 1588230740 2023-07-16 18:15:59,694 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 1588230740 2023-07-16 18:15:59,698 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 1588230740 2023-07-16 18:15:59,699 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740/info 2023-07-16 18:15:59,700 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740/info 2023-07-16 18:15:59,700 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName info 2023-07-16 18:15:59,701 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:59,701 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: 
cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family rep_barrier of region 1588230740 2023-07-16 18:15:59,702 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740/rep_barrier 2023-07-16 18:15:59,702 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740/rep_barrier 2023-07-16 18:15:59,702 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName rep_barrier 2023-07-16 18:15:59,703 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/rep_barrier, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:59,703 INFO [StoreOpener-1588230740-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family table of region 1588230740 2023-07-16 18:15:59,704 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740/table 2023-07-16 18:15:59,704 DEBUG [StoreOpener-1588230740-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740/table 2023-07-16 18:15:59,704 INFO [StoreOpener-1588230740-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 1588230740 columnFamilyName table 2023-07-16 18:15:59,705 INFO [StoreOpener-1588230740-1] regionserver.HStore(310): Store=1588230740/table, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:59,706 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] 
regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740 2023-07-16 18:15:59,707 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740 2023-07-16 18:15:59,709 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.FlushLargeStoresPolicy(65): No hbase.hregion.percolumnfamilyflush.size.lower.bound set in table hbase:meta descriptor;using region.getMemStoreFlushHeapSize/# of families (42.7 M)) instead. 2023-07-16 18:15:59,710 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 1588230740 2023-07-16 18:15:59,711 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 1588230740; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10148251520, jitterRate=-0.054870426654815674}}}, FlushLargeStoresPolicy{flushSizeLowerBound=44739242} 2023-07-16 18:15:59,711 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 1588230740: 2023-07-16 18:15:59,712 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:meta,,1.1588230740, pid=3, masterSystemTime=1689531359666 2023-07-16 18:15:59,717 DEBUG [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:meta,,1.1588230740 2023-07-16 18:15:59,717 INFO [RS_OPEN_META-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:meta,,1.1588230740 2023-07-16 18:15:59,717 WARN [ReadOnlyZKClient-127.0.0.1:54881@0x63fcf2f8] client.ZKConnectionRegistry(168): Meta region is in state OPENING 2023-07-16 18:15:59,718 INFO [PEWorker-5] zookeeper.MetaTableLocator(228): Setting hbase:meta replicaId=0 location in ZooKeeper as jenkins-hbase4.apache.org,37401,1689531358669, state=OPEN 2023-07-16 18:15:59,719 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,32899,1689531358331] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 18:15:59,720 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/meta-region-server 2023-07-16 18:15:59,720 DEBUG [zk-event-processor-pool-0] master.MetaRegionLocationCache(164): Updating meta znode for path /hbase/meta-region-server: CHANGED 2023-07-16 18:15:59,720 INFO [RS-EventLoopGroup-14-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33054, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 18:15:59,722 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,32899,1689531358331] master.HMaster(2148): Client=null/null create 'hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 
'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 18:15:59,723 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=3, resume processing ppid=2 2023-07-16 18:15:59,723 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=3, ppid=2, state=SUCCESS; OpenRegionProcedure 1588230740, server=jenkins-hbase4.apache.org,37401,1689531358669 in 209 msec 2023-07-16 18:15:59,724 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,32899,1689531358331] procedure2.ProcedureExecutor(1029): Stored pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-16 18:15:59,725 DEBUG [PEWorker-2] procedure2.ProcedureExecutor(1400): LOCK_EVENT_WAIT pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:rsgroup 2023-07-16 18:15:59,726 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=2, resume processing ppid=1 2023-07-16 18:15:59,726 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=2, ppid=1, state=SUCCESS; TransitRegionStateProcedure table=hbase:meta, region=1588230740, ASSIGN in 417 msec 2023-07-16 18:15:59,728 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=1, state=SUCCESS; InitMetaProcedure table=hbase:meta in 621 msec 2023-07-16 18:15:59,728 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(953): Wait for region servers to report in: status=null, state=RUNNING, startTime=1689531359728, completionTime=-1 2023-07-16 18:15:59,728 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.ServerManager(821): Finished waiting on RegionServer count=3; waited=0ms, expected min=3 server(s), max=3 server(s), master is running 2023-07-16 18:15:59,728 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1517): Joining cluster... 2023-07-16 18:15:59,730 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1529): Number of RegionServers=3 2023-07-16 18:15:59,730 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$RegionInTransitionChore; timeout=60000, timestamp=1689531419730 2023-07-16 18:15:59,730 INFO [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.TimeoutExecutorThread(81): ADDED pid=-1, state=WAITING_TIMEOUT; org.apache.hadoop.hbase.master.assignment.AssignmentManager$DeadServerMetricRegionChore; timeout=120000, timestamp=1689531479730 2023-07-16 18:15:59,730 INFO [master/jenkins-hbase4:0:becomeActiveMaster] assignment.AssignmentManager(1536): Joined the cluster in 1 msec 2023-07-16 18:15:59,734 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,32899,1689531358331-ClusterStatusChore, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:59,734 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,32899,1689531358331-BalancerChore, period=300000, unit=MILLISECONDS is enabled. 
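The create 'hbase:rsgroup' request logged above is issued internally by the RSGroup startup worker rather than by a user. For orientation only, here is a minimal sketch of building an equivalent table descriptor with the HBase 2.4 client API: the MultiRowMutationEndpoint coprocessor, the DisabledRegionSplitPolicy, and a single 'm' family with VERSIONS=1, a ROW bloom filter, and 64 KB blocks. The class name, the scratch table name, and the connection setup are illustrative assumptions, not part of this test run.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class RsGroupTableSketch {
  // Builds a descriptor mirroring the attributes shown in the log entry above.
  static TableDescriptor rsGroupLikeDescriptor(TableName name) throws IOException {
    return TableDescriptorBuilder.newBuilder(name)
        // coprocessor$1 => MultiRowMutationEndpoint
        .setCoprocessor("org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint")
        // SPLIT_POLICY => DisabledRegionSplitPolicy, so the single region never splits
        .setRegionSplitPolicyClassName(
            "org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy")
        // {NAME => 'm', BLOOMFILTER => 'ROW', VERSIONS => '1', BLOCKSIZE => '65536'}
        .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("m"))
            .setBloomFilterType(BloomType.ROW)
            .setMaxVersions(1)
            .setBlocksize(65536)
            .build())
        .build();
  }

  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Create it under a scratch name rather than the reserved hbase: namespace.
      admin.createTable(rsGroupLikeDescriptor(TableName.valueOf("demo_rsgroup_like")));
    }
  }
}
```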
2023-07-16 18:15:59,734 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,32899,1689531358331-RegionNormalizerChore, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:59,735 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=CatalogJanitor-jenkins-hbase4:32899, period=300000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:59,735 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=HbckChore-, period=3600000, unit=MILLISECONDS is enabled. 2023-07-16 18:15:59,735 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.TableNamespaceManager(92): Namespace table not found. Creating... 2023-07-16 18:15:59,735 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(2148): Client=null/null create 'hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'} 2023-07-16 18:15:59,735 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 18:15:59,736 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=hbase:namespace 2023-07-16 18:15:59,736 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 18:15:59,737 DEBUG [HFileArchiver-3] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/.tmp/data/hbase/rsgroup/c09408583a4b3a7e05655ca4c086b84f 2023-07-16 18:15:59,738 DEBUG [HFileArchiver-3] backup.HFileArchiver(153): Directory hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/.tmp/data/hbase/rsgroup/c09408583a4b3a7e05655ca4c086b84f empty. 
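The create 'hbase:namespace' request above uses a small, in-memory 'info' family. A sketch of the corresponding column-family settings (IN_MEMORY, VERSIONS=10, 8 KB block size), assuming the standard 2.4 builder API; the class and method names here are illustrative:

```java
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class NamespaceFamilySketch {
  // Mirrors {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true',
  //          VERSIONS => '10', BLOCKSIZE => '8192'} from the log entry above.
  static ColumnFamilyDescriptor namespaceInfoFamily() {
    return ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("info"))
        .setInMemory(true)                 // favour the in-memory tier of the block cache
        .setMaxVersions(10)                // VERSIONS => '10'
        .setBlocksize(8192)                // small blocks suit tiny namespace rows
        .setBloomFilterType(BloomType.ROW) // BLOOMFILTER => 'ROW'
        .build();
  }
}
```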
2023-07-16 18:15:59,738 DEBUG [HFileArchiver-3] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/.tmp/data/hbase/rsgroup/c09408583a4b3a7e05655ca4c086b84f 2023-07-16 18:15:59,738 DEBUG [PEWorker-4] procedure.DeleteTableProcedure(328): Archived hbase:rsgroup regions 2023-07-16 18:15:59,742 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 18:15:59,742 DEBUG [master/jenkins-hbase4:0.Chore.1] janitor.CatalogJanitor(175): 2023-07-16 18:15:59,742 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 18:15:59,744 DEBUG [HFileArchiver-1] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/.tmp/data/hbase/namespace/5e547c590a8a20119ab4e8cece71317b 2023-07-16 18:15:59,744 DEBUG [HFileArchiver-1] backup.HFileArchiver(153): Directory hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/.tmp/data/hbase/namespace/5e547c590a8a20119ab4e8cece71317b empty. 2023-07-16 18:15:59,745 DEBUG [HFileArchiver-1] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/.tmp/data/hbase/namespace/5e547c590a8a20119ab4e8cece71317b 2023-07-16 18:15:59,745 DEBUG [PEWorker-5] procedure.DeleteTableProcedure(328): Archived hbase:namespace regions 2023-07-16 18:15:59,762 DEBUG [PEWorker-4] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/.tmp/data/hbase/rsgroup/.tabledesc/.tableinfo.0000000001 2023-07-16 18:15:59,767 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(7675): creating {ENCODED => c09408583a4b3a7e05655ca4c086b84f, NAME => 'hbase:rsgroup,,1689531359722.c09408583a4b3a7e05655ca4c086b84f.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:rsgroup', {TABLE_ATTRIBUTES => {coprocessor$1 => '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|', METADATA => {'SPLIT_POLICY' => 'org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy'}}}, {NAME => 'm', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/.tmp 2023-07-16 18:15:59,773 DEBUG [PEWorker-5] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/.tmp/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 2023-07-16 18:15:59,775 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5e547c590a8a20119ab4e8cece71317b, NAME => 'hbase:namespace,,1689531359735.5e547c590a8a20119ab4e8cece71317b.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='hbase:namespace', {NAME => 'info', BLOOMFILTER => 'ROW', IN_MEMORY => 'true', VERSIONS => '10', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 
'true', BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/.tmp 2023-07-16 18:15:59,783 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689531359722.c09408583a4b3a7e05655ca4c086b84f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:59,783 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1604): Closing c09408583a4b3a7e05655ca4c086b84f, disabling compactions & flushes 2023-07-16 18:15:59,783 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689531359722.c09408583a4b3a7e05655ca4c086b84f. 2023-07-16 18:15:59,783 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689531359722.c09408583a4b3a7e05655ca4c086b84f. 2023-07-16 18:15:59,783 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689531359722.c09408583a4b3a7e05655ca4c086b84f. after waiting 0 ms 2023-07-16 18:15:59,783 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689531359722.c09408583a4b3a7e05655ca4c086b84f. 2023-07-16 18:15:59,783 INFO [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689531359722.c09408583a4b3a7e05655ca4c086b84f. 2023-07-16 18:15:59,783 DEBUG [RegionOpenAndInit-hbase:rsgroup-pool-0] regionserver.HRegion(1558): Region close journal for c09408583a4b3a7e05655ca4c086b84f: 2023-07-16 18:15:59,785 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 18:15:59,786 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:rsgroup,,1689531359722.c09408583a4b3a7e05655ca4c086b84f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689531359786"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531359786"}]},"ts":"1689531359786"} 2023-07-16 18:15:59,793 INFO [PEWorker-4] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
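Entries like the Put {"totalColumns":2,...} above are how CreateTableProcedure records a region and its state in hbase:meta under the 'info' family ('regioninfo', 'state', later 'server' and 'seqnumDuringOpen'). Those rows can be read back with an ordinary client scan; the following sketch assumes nothing beyond the standard client API, and the printed format is illustrative:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MetaScanSketch {
  public static void main(String[] args) throws IOException {
    byte[] info = Bytes.toBytes("info");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table meta = conn.getTable(TableName.META_TABLE_NAME);
         ResultScanner scanner = meta.getScanner(new Scan().addFamily(info))) {
      for (Result r : scanner) {
        byte[] state = r.getValue(info, Bytes.toBytes("state"));
        // Row key is "<table>,<startkey>,<timestamp>.<encoded-region-name>."
        System.out.println(Bytes.toStringBinary(r.getRow())
            + " state=" + (state == null ? "n/a" : Bytes.toString(state)));
      }
    }
  }
}
```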
2023-07-16 18:15:59,794 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 18:15:59,794 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531359794"}]},"ts":"1689531359794"} 2023-07-16 18:15:59,799 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689531359735.5e547c590a8a20119ab4e8cece71317b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:59,799 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLING in hbase:meta 2023-07-16 18:15:59,799 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1604): Closing 5e547c590a8a20119ab4e8cece71317b, disabling compactions & flushes 2023-07-16 18:15:59,799 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689531359735.5e547c590a8a20119ab4e8cece71317b. 2023-07-16 18:15:59,799 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689531359735.5e547c590a8a20119ab4e8cece71317b. 2023-07-16 18:15:59,799 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689531359735.5e547c590a8a20119ab4e8cece71317b. after waiting 0 ms 2023-07-16 18:15:59,799 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689531359735.5e547c590a8a20119ab4e8cece71317b. 2023-07-16 18:15:59,799 INFO [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689531359735.5e547c590a8a20119ab4e8cece71317b. 
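The procedure above walks through CREATE_TABLE_ADD_TO_META and CREATE_TABLE_ASSIGN_REGIONS before the table state moves from ENABLING to ENABLED. From the client side the usual pattern is simply to create the table and confirm it is enabled; a minimal sketch, with the helper name and timeout being assumptions (Admin.createTable already blocks for the common case, so the poll only mirrors the state flow shown in the log):

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.TableDescriptor;

public class CreateAndWaitSketch {
  // Hypothetical helper: create a table, then poll until the master reports it enabled.
  static void createAndWaitEnabled(Admin admin, TableDescriptor desc, long timeoutMs)
      throws IOException, InterruptedException {
    admin.createTable(desc); // drives a CreateTableProcedure like pid=4 / pid=5 above
    TableName name = desc.getTableName();
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!admin.isTableEnabled(name)) { // flips once the ASSIGN subprocedures finish
      if (System.currentTimeMillis() > deadline) {
        throw new IOException("Timed out waiting for " + name + " to become enabled");
      }
      Thread.sleep(100);
    }
  }
}
```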
2023-07-16 18:15:59,799 DEBUG [RegionOpenAndInit-hbase:namespace-pool-0] regionserver.HRegion(1558): Region close journal for 5e547c590a8a20119ab4e8cece71317b: 2023-07-16 18:15:59,801 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 18:15:59,803 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"hbase:namespace,,1689531359735.5e547c590a8a20119ab4e8cece71317b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689531359803"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531359803"}]},"ts":"1689531359803"} 2023-07-16 18:15:59,803 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 18:15:59,804 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 18:15:59,804 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 18:15:59,804 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 18:15:59,804 DEBUG [PEWorker-4] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 18:15:59,804 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=c09408583a4b3a7e05655ca4c086b84f, ASSIGN}] 2023-07-16 18:15:59,805 INFO [PEWorker-5] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 2023-07-16 18:15:59,806 INFO [PEWorker-2] procedure.MasterProcedureScheduler(727): Took xlock for pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:rsgroup, region=c09408583a4b3a7e05655ca4c086b84f, ASSIGN 2023-07-16 18:15:59,807 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 18:15:59,807 INFO [PEWorker-2] assignment.TransitRegionStateProcedure(193): Starting pid=6, ppid=4, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:rsgroup, region=c09408583a4b3a7e05655ca4c086b84f, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,35551,1689531358817; forceNewPlan=false, retain=false 2023-07-16 18:15:59,807 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531359807"}]},"ts":"1689531359807"} 2023-07-16 18:15:59,808 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLING in hbase:meta 2023-07-16 18:15:59,811 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 18:15:59,811 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 18:15:59,811 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 18:15:59,811 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 18:15:59,811 DEBUG [PEWorker-5] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of 
hosts=1, number of racks=1 2023-07-16 18:15:59,811 INFO [PEWorker-5] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=5e547c590a8a20119ab4e8cece71317b, ASSIGN}] 2023-07-16 18:15:59,812 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=hbase:namespace, region=5e547c590a8a20119ab4e8cece71317b, ASSIGN 2023-07-16 18:15:59,813 INFO [PEWorker-1] assignment.TransitRegionStateProcedure(193): Starting pid=7, ppid=5, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=hbase:namespace, region=5e547c590a8a20119ab4e8cece71317b, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,43051,1689531358519; forceNewPlan=false, retain=false 2023-07-16 18:15:59,813 INFO [jenkins-hbase4:32899] balancer.BaseLoadBalancer(1545): Reassigned 2 regions. 2 retained the pre-restart assignment. 2023-07-16 18:15:59,815 INFO [PEWorker-3] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=c09408583a4b3a7e05655ca4c086b84f, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,35551,1689531358817 2023-07-16 18:15:59,815 DEBUG [PEWorker-3] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:rsgroup,,1689531359722.c09408583a4b3a7e05655ca4c086b84f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689531359815"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531359815"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531359815"}]},"ts":"1689531359815"} 2023-07-16 18:15:59,815 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=5e547c590a8a20119ab4e8cece71317b, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,43051,1689531358519 2023-07-16 18:15:59,815 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"hbase:namespace,,1689531359735.5e547c590a8a20119ab4e8cece71317b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689531359815"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531359815"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531359815"}]},"ts":"1689531359815"} 2023-07-16 18:15:59,817 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=8, ppid=6, state=RUNNABLE; OpenRegionProcedure c09408583a4b3a7e05655ca4c086b84f, server=jenkins-hbase4.apache.org,35551,1689531358817}] 2023-07-16 18:15:59,818 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=9, ppid=7, state=RUNNABLE; OpenRegionProcedure 5e547c590a8a20119ab4e8cece71317b, server=jenkins-hbase4.apache.org,43051,1689531358519}] 2023-07-16 18:15:59,971 DEBUG [RSProcedureDispatcher-pool-1] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,35551,1689531358817 2023-07-16 18:15:59,971 DEBUG [RSProcedureDispatcher-pool-2] master.ServerManager(712): New admin connection to jenkins-hbase4.apache.org,43051,1689531358519 2023-07-16 18:15:59,971 DEBUG [RSProcedureDispatcher-pool-1] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 18:15:59,971 DEBUG [RSProcedureDispatcher-pool-2] ipc.RpcConnection(124): Using SIMPLE authentication for service=AdminService, sasl=false 2023-07-16 18:15:59,972 INFO 
[RS-EventLoopGroup-15-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59280, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 18:15:59,972 INFO [RS-EventLoopGroup-13-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:46458, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=AdminService 2023-07-16 18:15:59,976 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:namespace,,1689531359735.5e547c590a8a20119ab4e8cece71317b. 2023-07-16 18:15:59,976 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open hbase:rsgroup,,1689531359722.c09408583a4b3a7e05655ca4c086b84f. 2023-07-16 18:15:59,976 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5e547c590a8a20119ab4e8cece71317b, NAME => 'hbase:namespace,,1689531359735.5e547c590a8a20119ab4e8cece71317b.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:59,976 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => c09408583a4b3a7e05655ca4c086b84f, NAME => 'hbase:rsgroup,,1689531359722.c09408583a4b3a7e05655ca4c086b84f.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:15:59,976 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table namespace 5e547c590a8a20119ab4e8cece71317b 2023-07-16 18:15:59,976 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(215): Loading coprocessor class org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint with path null and priority 536870911 2023-07-16 18:15:59,976 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:namespace,,1689531359735.5e547c590a8a20119ab4e8cece71317b.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:59,976 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(8550): Registered coprocessor service: region=hbase:rsgroup,,1689531359722.c09408583a4b3a7e05655ca4c086b84f. service=MultiRowMutationService 2023-07-16 18:15:59,976 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5e547c590a8a20119ab4e8cece71317b 2023-07-16 18:15:59,976 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5e547c590a8a20119ab4e8cece71317b 2023-07-16 18:15:59,976 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.RegionCoprocessorHost(393): Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:rsgroup successfully. 
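The RegionCoprocessorHost lines above confirm that MultiRowMutationEndpoint was loaded from the hbase:rsgroup table descriptor when the region opened. A quick client-side check that a table carries a given coprocessor, assuming only the 2.4 Admin and TableDescriptor interfaces:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.TableDescriptor;

public class CoprocessorCheckSketch {
  static boolean hasMultiRowMutation(Admin admin, TableName table) throws IOException {
    // Fetch the descriptor the region servers load coprocessors from.
    TableDescriptor desc = admin.getDescriptor(table);
    return desc.hasCoprocessor(
        "org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint");
  }
}
```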
2023-07-16 18:15:59,976 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table rsgroup c09408583a4b3a7e05655ca4c086b84f 2023-07-16 18:15:59,977 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated hbase:rsgroup,,1689531359722.c09408583a4b3a7e05655ca4c086b84f.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:15:59,977 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for c09408583a4b3a7e05655ca4c086b84f 2023-07-16 18:15:59,977 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for c09408583a4b3a7e05655ca4c086b84f 2023-07-16 18:15:59,978 INFO [StoreOpener-5e547c590a8a20119ab4e8cece71317b-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family info of region 5e547c590a8a20119ab4e8cece71317b 2023-07-16 18:15:59,978 INFO [StoreOpener-c09408583a4b3a7e05655ca4c086b84f-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family m of region c09408583a4b3a7e05655ca4c086b84f 2023-07-16 18:15:59,979 DEBUG [StoreOpener-5e547c590a8a20119ab4e8cece71317b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/namespace/5e547c590a8a20119ab4e8cece71317b/info 2023-07-16 18:15:59,979 DEBUG [StoreOpener-c09408583a4b3a7e05655ca4c086b84f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/rsgroup/c09408583a4b3a7e05655ca4c086b84f/m 2023-07-16 18:15:59,979 DEBUG [StoreOpener-c09408583a4b3a7e05655ca4c086b84f-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/rsgroup/c09408583a4b3a7e05655ca4c086b84f/m 2023-07-16 18:15:59,979 DEBUG [StoreOpener-5e547c590a8a20119ab4e8cece71317b-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/namespace/5e547c590a8a20119ab4e8cece71317b/info 2023-07-16 18:15:59,980 INFO [StoreOpener-5e547c590a8a20119ab4e8cece71317b-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5e547c590a8a20119ab4e8cece71317b columnFamilyName info 2023-07-16 18:15:59,980 INFO 
[StoreOpener-c09408583a4b3a7e05655ca4c086b84f-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region c09408583a4b3a7e05655ca4c086b84f columnFamilyName m 2023-07-16 18:15:59,980 INFO [StoreOpener-5e547c590a8a20119ab4e8cece71317b-1] regionserver.HStore(310): Store=5e547c590a8a20119ab4e8cece71317b/info, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:59,980 INFO [StoreOpener-c09408583a4b3a7e05655ca4c086b84f-1] regionserver.HStore(310): Store=c09408583a4b3a7e05655ca4c086b84f/m, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:15:59,981 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/rsgroup/c09408583a4b3a7e05655ca4c086b84f 2023-07-16 18:15:59,981 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/namespace/5e547c590a8a20119ab4e8cece71317b 2023-07-16 18:15:59,981 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/rsgroup/c09408583a4b3a7e05655ca4c086b84f 2023-07-16 18:15:59,981 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/namespace/5e547c590a8a20119ab4e8cece71317b 2023-07-16 18:15:59,984 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5e547c590a8a20119ab4e8cece71317b 2023-07-16 18:15:59,984 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for c09408583a4b3a7e05655ca4c086b84f 2023-07-16 18:15:59,987 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/namespace/5e547c590a8a20119ab4e8cece71317b/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:59,988 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5e547c590a8a20119ab4e8cece71317b; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=10649404640, jitterRate=-0.008196905255317688}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 
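The CompactionConfiguration and split-policy lines above echo the stock defaults: minFilesToCompact=3, maxFilesToCompact=10, ratio 1.2, and a roughly 10 GB split target before the random jitter is applied. A sketch of the configuration keys behind those numbers; the values are the defaults and are set explicitly here only for illustration:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class CompactionConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.hstore.compaction.min", 3);        // "minFilesToCompact:3" in the log
    conf.setInt("hbase.hstore.compaction.max", 10);       // "maxFilesToCompact:10"
    conf.setFloat("hbase.hstore.compaction.ratio", 1.2f); // "ratio 1.200000"
    conf.setLong("hbase.hregion.max.filesize", 10L * 1024 * 1024 * 1024); // ~10 GB split size
    // The jitterRate printed when each region opens is a random offset on this size,
    // which is why the log shows desiredMaxFileSize values slightly above or below 10 GB.
    System.out.println(conf.get("hbase.hregion.max.filesize"));
  }
}
```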
2023-07-16 18:15:59,988 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/rsgroup/c09408583a4b3a7e05655ca4c086b84f/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:15:59,988 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5e547c590a8a20119ab4e8cece71317b: 2023-07-16 18:15:59,989 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened c09408583a4b3a7e05655ca4c086b84f; next sequenceid=2; org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy@3cb0e666, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:15:59,989 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for c09408583a4b3a7e05655ca4c086b84f: 2023-07-16 18:15:59,989 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:namespace,,1689531359735.5e547c590a8a20119ab4e8cece71317b., pid=9, masterSystemTime=1689531359971 2023-07-16 18:15:59,992 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for hbase:rsgroup,,1689531359722.c09408583a4b3a7e05655ca4c086b84f., pid=8, masterSystemTime=1689531359971 2023-07-16 18:15:59,994 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:namespace,,1689531359735.5e547c590a8a20119ab4e8cece71317b. 2023-07-16 18:15:59,995 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:namespace,,1689531359735.5e547c590a8a20119ab4e8cece71317b. 2023-07-16 18:15:59,995 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=7 updating hbase:meta row=5e547c590a8a20119ab4e8cece71317b, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,43051,1689531358519 2023-07-16 18:15:59,995 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:namespace,,1689531359735.5e547c590a8a20119ab4e8cece71317b.","families":{"info":[{"qualifier":"regioninfo","vlen":41,"tag":[],"timestamp":"1689531359995"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531359995"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531359995"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531359995"}]},"ts":"1689531359995"} 2023-07-16 18:15:59,995 DEBUG [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for hbase:rsgroup,,1689531359722.c09408583a4b3a7e05655ca4c086b84f. 2023-07-16 18:15:59,996 INFO [RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened hbase:rsgroup,,1689531359722.c09408583a4b3a7e05655ca4c086b84f. 
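Once the OpenRegionProcedures above complete, hbase:meta holds the OPEN location for each region (see the regionState=OPEN updates that follow). A sketch that lists where a table's regions ended up, using the standard RegionLocator API; the table name is taken from this log, everything else is illustrative:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionLocation;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;

public class RegionLocationSketch {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf("hbase:rsgroup"))) {
      for (HRegionLocation loc : locator.getAllRegionLocations()) {
        // Prints e.g. "c09408583a4b3a7e05655ca4c086b84f -> jenkins-hbase4.apache.org,35551,..."
        System.out.println(loc.getRegion().getEncodedName() + " -> " + loc.getServerName());
      }
    }
  }
}
```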
2023-07-16 18:15:59,996 INFO [PEWorker-2] assignment.RegionStateStore(219): pid=6 updating hbase:meta row=c09408583a4b3a7e05655ca4c086b84f, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,35551,1689531358817 2023-07-16 18:15:59,996 DEBUG [PEWorker-2] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"hbase:rsgroup,,1689531359722.c09408583a4b3a7e05655ca4c086b84f.","families":{"info":[{"qualifier":"regioninfo","vlen":39,"tag":[],"timestamp":"1689531359996"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531359996"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531359996"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531359996"}]},"ts":"1689531359996"} 2023-07-16 18:15:59,999 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=9, resume processing ppid=7 2023-07-16 18:15:59,999 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=9, ppid=7, state=SUCCESS; OpenRegionProcedure 5e547c590a8a20119ab4e8cece71317b, server=jenkins-hbase4.apache.org,43051,1689531358519 in 180 msec 2023-07-16 18:16:00,000 INFO [PEWorker-2] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=8, resume processing ppid=6 2023-07-16 18:16:00,000 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=8, ppid=6, state=SUCCESS; OpenRegionProcedure c09408583a4b3a7e05655ca4c086b84f, server=jenkins-hbase4.apache.org,35551,1689531358817 in 180 msec 2023-07-16 18:16:00,001 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=7, resume processing ppid=5 2023-07-16 18:16:00,001 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=7, ppid=5, state=SUCCESS; TransitRegionStateProcedure table=hbase:namespace, region=5e547c590a8a20119ab4e8cece71317b, ASSIGN in 188 msec 2023-07-16 18:16:00,002 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=6, resume processing ppid=4 2023-07-16 18:16:00,002 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=6, ppid=4, state=SUCCESS; TransitRegionStateProcedure table=hbase:rsgroup, region=c09408583a4b3a7e05655ca4c086b84f, ASSIGN in 196 msec 2023-07-16 18:16:00,002 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 18:16:00,002 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:namespace","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531360002"}]},"ts":"1689531360002"} 2023-07-16 18:16:00,002 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 18:16:00,002 DEBUG [PEWorker-4] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"hbase:rsgroup","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531360002"}]},"ts":"1689531360002"} 2023-07-16 18:16:00,004 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=hbase:namespace, state=ENABLED in hbase:meta 2023-07-16 18:16:00,004 INFO [PEWorker-4] hbase.MetaTableAccessor(1635): Updated tableName=hbase:rsgroup, state=ENABLED in hbase:meta 2023-07-16 18:16:00,008 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=5, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; 
CreateTableProcedure table=hbase:namespace execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 18:16:00,009 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=4, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=hbase:rsgroup execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 18:16:00,009 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=5, state=SUCCESS; CreateTableProcedure table=hbase:namespace in 273 msec 2023-07-16 18:16:00,010 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=4, state=SUCCESS; CreateTableProcedure table=hbase:rsgroup in 287 msec 2023-07-16 18:16:00,027 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,32899,1689531358331] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 18:16:00,028 INFO [RS-EventLoopGroup-15-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:59286, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 18:16:00,030 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,32899,1689531358331] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(840): RSGroup table=hbase:rsgroup is online, refreshing cached information 2023-07-16 18:16:00,030 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,32899,1689531358331] rsgroup.RSGroupInfoManagerImpl(534): Refreshing in Online mode. 2023-07-16 18:16:00,035 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:16:00,035 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,32899,1689531358331] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:16:00,037 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,32899,1689531358331] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 18:16:00,037 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKUtil(164): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/namespace 2023-07-16 18:16:00,040 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker-jenkins-hbase4.apache.org,32899,1689531358331] rsgroup.RSGroupInfoManagerImpl$RSGroupStartupWorker(823): GroupBasedLoadBalancer is now online 2023-07-16 18:16:00,040 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/namespace 2023-07-16 18:16:00,040 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:16:00,043 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 18:16:00,044 INFO [RS-EventLoopGroup-13-3] ipc.ServerRpcConnection(540): Connection from 
172.31.14.131:46468, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 18:16:00,046 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=10, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=default 2023-07-16 18:16:00,054 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 18:16:00,057 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=10, state=SUCCESS; CreateNamespaceProcedure, namespace=default in 9 msec 2023-07-16 18:16:00,068 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] procedure2.ProcedureExecutor(1029): Stored pid=11, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=hbase 2023-07-16 18:16:00,075 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 18:16:00,078 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=11, state=SUCCESS; CreateNamespaceProcedure, namespace=hbase in 9 msec 2023-07-16 18:16:00,082 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/default 2023-07-16 18:16:00,085 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/hbase 2023-07-16 18:16:00,085 INFO [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1083): Master has completed initialization 1.113sec 2023-07-16 18:16:00,086 INFO [master/jenkins-hbase4:0:becomeActiveMaster] quotas.MasterQuotaManager(97): Quota support disabled 2023-07-16 18:16:00,086 INFO [master/jenkins-hbase4:0:becomeActiveMaster] slowlog.SlowLogMasterService(57): Slow/Large requests logging to system table hbase:slowlog is disabled. Quitting. 2023-07-16 18:16:00,086 INFO [master/jenkins-hbase4:0:becomeActiveMaster] zookeeper.ZKWatcher(269): not a secure deployment, proceeding 2023-07-16 18:16:00,086 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,32899,1689531358331-ExpiredMobFileCleanerChore, period=86400, unit=SECONDS is enabled. 2023-07-16 18:16:00,086 INFO [master/jenkins-hbase4:0:becomeActiveMaster] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,32899,1689531358331-MobCompactionChore, period=604800, unit=SECONDS is enabled. 
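The two CreateNamespaceProcedures above register the built-in 'default' and 'hbase' namespaces as the master finishes initialization. The same operations are available to clients through Admin; a minimal sketch, with the namespace name being an illustrative placeholder:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class NamespaceSketch {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = conn.getAdmin()) {
      admin.createNamespace(NamespaceDescriptor.create("demo_ns").build());
      for (NamespaceDescriptor ns : admin.listNamespaceDescriptors()) {
        System.out.println(ns.getName()); // expect at least: default, hbase, demo_ns
      }
    }
  }
}
```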
2023-07-16 18:16:00,086 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster] master.HMaster(1175): Balancer post startup initialization complete, took 0 seconds 2023-07-16 18:16:00,167 DEBUG [Listener at localhost/42859] zookeeper.ReadOnlyZKClient(139): Connect 0x5da743fd to 127.0.0.1:54881 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 18:16:00,173 DEBUG [Listener at localhost/42859] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@1fb70e0, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 18:16:00,175 DEBUG [hconnection-0x2a21632f-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 18:16:00,180 INFO [RS-EventLoopGroup-14-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:33056, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 18:16:00,182 INFO [Listener at localhost/42859] hbase.HBaseTestingUtility(1145): Minicluster is up; activeMaster=jenkins-hbase4.apache.org,32899,1689531358331 2023-07-16 18:16:00,182 INFO [Listener at localhost/42859] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:16:00,185 DEBUG [Listener at localhost/42859] ipc.RpcConnection(124): Using SIMPLE authentication for service=MasterService, sasl=false 2023-07-16 18:16:00,187 INFO [RS-EventLoopGroup-12-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:52678, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=MasterService 2023-07-16 18:16:00,190 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeCreated, state=SyncConnected, path=/hbase/balancer 2023-07-16 18:16:00,190 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:16:00,191 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(492): Client=jenkins//172.31.14.131 set balanceSwitch=false 2023-07-16 18:16:00,192 DEBUG [Listener at localhost/42859] zookeeper.ReadOnlyZKClient(139): Connect 0x723c7667 to 127.0.0.1:54881 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 18:16:00,198 DEBUG [Listener at localhost/42859] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@53254b14, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 18:16:00,199 INFO [Listener at localhost/42859] zookeeper.RecoverableZooKeeper(93): Process identifier=VerifyingRSGroupAdminClient connecting to ZooKeeper ensemble=127.0.0.1:54881 2023-07-16 18:16:00,202 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): VerifyingRSGroupAdminClient0x0, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 18:16:00,203 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): VerifyingRSGroupAdminClient-0x1016f591f66000a connected 2023-07-16 
18:16:00,209 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:00,210 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:00,212 INFO [Listener at localhost/42859] rsgroup.TestRSGroupsBase(152): Restoring servers: 1 2023-07-16 18:16:00,224 INFO [Listener at localhost/42859] client.ConnectionUtils(127): regionserver/jenkins-hbase4:0 server-side Connection retries=45 2023-07-16 18:16:00,224 INFO [Listener at localhost/42859] ipc.RpcExecutor(189): Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:16:00,224 INFO [Listener at localhost/42859] ipc.RpcExecutor(189): Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=30, handlerCount=3 2023-07-16 18:16:00,225 INFO [Listener at localhost/42859] ipc.RWQueueRpcExecutor(107): priority.RWQ.Fifo writeQueues=1 writeHandlers=1 readQueues=1 readHandlers=2 scanQueues=0 scanHandlers=0 2023-07-16 18:16:00,225 INFO [Listener at localhost/42859] ipc.RpcExecutor(189): Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=3 2023-07-16 18:16:00,225 INFO [Listener at localhost/42859] ipc.RpcExecutor(189): Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=30, handlerCount=1 2023-07-16 18:16:00,225 INFO [Listener at localhost/42859] ipc.RpcServerFactory(64): Creating org.apache.hadoop.hbase.ipc.NettyRpcServer hosting hbase.pb.ClientService, hbase.pb.AdminService 2023-07-16 18:16:00,225 INFO [Listener at localhost/42859] ipc.NettyRpcServer(120): Bind to /172.31.14.131:46003 2023-07-16 18:16:00,226 INFO [Listener at localhost/42859] hfile.BlockCacheFactory(142): Allocating BlockCache size=782.40 MB, blockSize=64 KB 2023-07-16 18:16:00,228 DEBUG [Listener at localhost/42859] mob.MobFileCache(120): MobFileCache enabled with cacheSize=1000, evictPeriods=3600sec, evictRemainRatio=0.5 2023-07-16 18:16:00,228 INFO [Listener at localhost/42859] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:16:00,229 INFO [Listener at localhost/42859] fs.HFileSystem(337): Added intercepting call to namenode#getBlockLocations so can do block reordering using class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks 2023-07-16 18:16:00,230 INFO [Listener at localhost/42859] zookeeper.RecoverableZooKeeper(93): Process identifier=regionserver:46003 connecting to ZooKeeper ensemble=127.0.0.1:54881 2023-07-16 18:16:00,234 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:460030x0, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=None, state=SyncConnected, path=null 2023-07-16 18:16:00,237 DEBUG [Listener at localhost/42859] zookeeper.ZKUtil(162): regionserver:460030x0, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/master 2023-07-16 18:16:00,238 DEBUG [Listener at localhost/42859] zookeeper.ZKUtil(162): 
regionserver:460030x0, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/running 2023-07-16 18:16:00,239 DEBUG [zk-event-processor-pool-0] zookeeper.ZKWatcher(625): regionserver:46003-0x1016f591f66000b connected 2023-07-16 18:16:00,239 DEBUG [Listener at localhost/42859] zookeeper.ZKUtil(164): regionserver:46003-0x1016f591f66000b, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/acl 2023-07-16 18:16:00,240 DEBUG [Listener at localhost/42859] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=default.FPBQ.Fifo, numCallQueues=1, port=46003 2023-07-16 18:16:00,240 DEBUG [Listener at localhost/42859] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=priority.RWQ.Fifo.write, numCallQueues=1, port=46003 2023-07-16 18:16:00,240 DEBUG [Listener at localhost/42859] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=priority.RWQ.Fifo.read, numCallQueues=1, port=46003 2023-07-16 18:16:00,242 DEBUG [Listener at localhost/42859] ipc.RpcExecutor(311): Started handlerCount=3 with threadPrefix=replication.FPBQ.Fifo, numCallQueues=1, port=46003 2023-07-16 18:16:00,243 DEBUG [Listener at localhost/42859] ipc.RpcExecutor(311): Started handlerCount=1 with threadPrefix=metaPriority.FPBQ.Fifo, numCallQueues=1, port=46003 2023-07-16 18:16:00,244 INFO [Listener at localhost/42859] http.HttpServer(900): Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter) 2023-07-16 18:16:00,244 INFO [Listener at localhost/42859] http.HttpServer(900): Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter) 2023-07-16 18:16:00,244 INFO [Listener at localhost/42859] http.HttpServer(900): Added global filter 'securityheaders' (class=org.apache.hadoop.hbase.http.SecurityHeadersFilter) 2023-07-16 18:16:00,245 INFO [Listener at localhost/42859] http.HttpServer(879): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver 2023-07-16 18:16:00,245 INFO [Listener at localhost/42859] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2023-07-16 18:16:00,245 INFO [Listener at localhost/42859] http.HttpServer(886): Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2023-07-16 18:16:00,245 INFO [Listener at localhost/42859] http.HttpServer(783): ASYNC_PROFILER_HOME environment variable and async.profiler.home system property not specified. Disabling /prof endpoint. 
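The "Restoring servers: 1" entry and the regionserver:46003 bootstrap that surrounds it show the test adding a fourth region server back to the minicluster. A sketch of doing the same with the testing utility; HBaseTestingUtility, StartMiniClusterOption, and MiniHBaseCluster are test-scope classes, so treat the exact class and method names here as assumptions based on the 2.4 test code rather than a public API:

```java
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.MiniHBaseCluster;
import org.apache.hadoop.hbase.StartMiniClusterOption;

public class MiniClusterSketch {
  public static void main(String[] args) throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    // Three region servers, matching the ServerManager line earlier in this section.
    util.startMiniCluster(StartMiniClusterOption.builder()
        .numMasters(1).numRegionServers(3).build());
    MiniHBaseCluster cluster = util.getMiniHBaseCluster();
    // Bring one more region server into the running cluster; it registers with the
    // master over ZooKeeper, which is what produces the bootstrap entries above.
    cluster.startRegionServer();
    util.shutdownMiniCluster();
  }
}
```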
2023-07-16 18:16:00,246 INFO [Listener at localhost/42859] http.HttpServer(1146): Jetty bound to port 33565 2023-07-16 18:16:00,246 INFO [Listener at localhost/42859] server.Server(375): jetty-9.4.50.v20221201; built: 2022-12-01T22:07:03.915Z; git: da9a0b30691a45daf90a9f17b5defa2f1434f882; jvm 1.8.0_362-b09 2023-07-16 18:16:00,248 INFO [Listener at localhost/42859] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:16:00,248 INFO [Listener at localhost/42859] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@461748f3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/hadoop.log.dir/,AVAILABLE} 2023-07-16 18:16:00,248 INFO [Listener at localhost/42859] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:16:00,248 INFO [Listener at localhost/42859] handler.ContextHandler(921): Started o.a.h.t.o.e.j.s.ServletContextHandler@461e4fbf{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,AVAILABLE} 2023-07-16 18:16:00,362 INFO [Listener at localhost/42859] webapp.StandardDescriptorProcessor(277): NO JSP Support for /, did not find org.apache.hbase.thirdparty.org.eclipse.jetty.jsp.JettyJspServlet 2023-07-16 18:16:00,363 INFO [Listener at localhost/42859] session.DefaultSessionIdManager(334): DefaultSessionIdManager workerName=node0 2023-07-16 18:16:00,363 INFO [Listener at localhost/42859] session.DefaultSessionIdManager(339): No SessionScavenger set, using defaults 2023-07-16 18:16:00,363 INFO [Listener at localhost/42859] session.HouseKeeper(132): node0 Scavenging every 600000ms 2023-07-16 18:16:00,364 INFO [Listener at localhost/42859] http.SecurityHeadersFilter(48): Added security headers filter 2023-07-16 18:16:00,365 INFO [Listener at localhost/42859] handler.ContextHandler(921): Started o.a.h.t.o.e.j.w.WebAppContext@7d5d5f98{regionserver,/,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/java.io.tmpdir/jetty-0_0_0_0-33565-hbase-server-2_4_18-SNAPSHOT_jar-_-any-2656657334188945965/webapp/,AVAILABLE}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 18:16:00,367 INFO [Listener at localhost/42859] server.AbstractConnector(333): Started ServerConnector@c24108f{HTTP/1.1, (http/1.1)}{0.0.0.0:33565} 2023-07-16 18:16:00,367 INFO [Listener at localhost/42859] server.Server(415): Started @45147ms 2023-07-16 18:16:00,369 INFO [RS:3;jenkins-hbase4:46003] regionserver.HRegionServer(951): ClusterId : 551150bd-315d-4320-93d1-990373dd96e3 2023-07-16 18:16:00,372 DEBUG [RS:3;jenkins-hbase4:46003] procedure.RegionServerProcedureManagerHost(43): Procedure flush-table-proc initializing 2023-07-16 18:16:00,374 DEBUG [RS:3;jenkins-hbase4:46003] procedure.RegionServerProcedureManagerHost(45): Procedure flush-table-proc initialized 2023-07-16 18:16:00,374 DEBUG [RS:3;jenkins-hbase4:46003] procedure.RegionServerProcedureManagerHost(43): Procedure online-snapshot initializing 2023-07-16 18:16:00,376 DEBUG [RS:3;jenkins-hbase4:46003] procedure.RegionServerProcedureManagerHost(45): Procedure online-snapshot initialized 2023-07-16 18:16:00,377 DEBUG [RS:3;jenkins-hbase4:46003] zookeeper.ReadOnlyZKClient(139): Connect 0x632a472d to 
127.0.0.1:54881 with session timeout=90000ms, retries 30, retry interval 1000ms, keepAlive=60000ms 2023-07-16 18:16:00,381 DEBUG [RS:3;jenkins-hbase4:46003] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@6dedb538, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=null 2023-07-16 18:16:00,381 DEBUG [RS:3;jenkins-hbase4:46003] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@69d22c5d, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 18:16:00,389 DEBUG [RS:3;jenkins-hbase4:46003] regionserver.ShutdownHook(81): Installed shutdown hook thread: Shutdownhook:RS:3;jenkins-hbase4:46003 2023-07-16 18:16:00,389 INFO [RS:3;jenkins-hbase4:46003] regionserver.RegionServerCoprocessorHost(66): System coprocessor loading is enabled 2023-07-16 18:16:00,389 INFO [RS:3;jenkins-hbase4:46003] regionserver.RegionServerCoprocessorHost(67): Table coprocessor loading is enabled 2023-07-16 18:16:00,389 DEBUG [RS:3;jenkins-hbase4:46003] regionserver.HRegionServer(1022): About to register with Master. 2023-07-16 18:16:00,390 INFO [RS:3;jenkins-hbase4:46003] regionserver.HRegionServer(2811): reportForDuty to master=jenkins-hbase4.apache.org,32899,1689531358331 with isa=jenkins-hbase4.apache.org/172.31.14.131:46003, startcode=1689531360224 2023-07-16 18:16:00,390 DEBUG [RS:3;jenkins-hbase4:46003] ipc.RpcConnection(124): Using SIMPLE authentication for service=RegionServerStatusService, sasl=false 2023-07-16 18:16:00,392 INFO [RS-EventLoopGroup-12-3] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:56955, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins.hfs.10 (auth:SIMPLE), service=RegionServerStatusService 2023-07-16 18:16:00,392 INFO [RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32899] master.ServerManager(394): Registering regionserver=jenkins-hbase4.apache.org,46003,1689531360224 2023-07-16 18:16:00,392 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,32899,1689531358331] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(787): Updating default servers. 
2023-07-16 18:16:00,393 DEBUG [RS:3;jenkins-hbase4:46003] regionserver.HRegionServer(1595): Config from master: hbase.rootdir=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058 2023-07-16 18:16:00,393 DEBUG [RS:3;jenkins-hbase4:46003] regionserver.HRegionServer(1595): Config from master: fs.defaultFS=hdfs://localhost:39679 2023-07-16 18:16:00,393 DEBUG [RS:3;jenkins-hbase4:46003] regionserver.HRegionServer(1595): Config from master: hbase.master.info.port=39933 2023-07-16 18:16:00,397 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:35551-0x1016f591f660003, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:16:00,397 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,32899,1689531358331] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:16:00,397 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:37401-0x1016f591f660002, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:16:00,397 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:43051-0x1016f591f660001, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:16:00,397 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:16:00,398 DEBUG [RS:3;jenkins-hbase4:46003] zookeeper.ZKUtil(162): regionserver:46003-0x1016f591f66000b, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46003,1689531360224 2023-07-16 18:16:00,398 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35551-0x1016f591f660003, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35551,1689531358817 2023-07-16 18:16:00,398 WARN [RS:3;jenkins-hbase4:46003] hbase.ZNodeClearer(69): Environment variable HBASE_ZNODE_FILE not set; znodes will not be cleared on crash by start scripts (Longer MTTR!) 
2023-07-16 18:16:00,398 INFO [RS:3;jenkins-hbase4:46003] wal.WALFactory(158): Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider 2023-07-16 18:16:00,398 DEBUG [RS:3;jenkins-hbase4:46003] regionserver.HRegionServer(1948): logDir=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/WALs/jenkins-hbase4.apache.org,46003,1689531360224 2023-07-16 18:16:00,402 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43051-0x1016f591f660001, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35551,1689531358817 2023-07-16 18:16:00,402 DEBUG [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,32899,1689531358331] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 2 2023-07-16 18:16:00,402 INFO [RegionServerTracker-0] master.RegionServerTracker(190): RegionServer ephemeral node created, adding [jenkins-hbase4.apache.org,46003,1689531360224] 2023-07-16 18:16:00,402 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35551-0x1016f591f660003, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43051,1689531358519 2023-07-16 18:16:00,402 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37401-0x1016f591f660002, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35551,1689531358817 2023-07-16 18:16:00,403 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43051-0x1016f591f660001, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43051,1689531358519 2023-07-16 18:16:00,403 INFO [org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,32899,1689531358331] rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread(792): Updated with servers: 4 2023-07-16 18:16:00,404 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35551-0x1016f591f660003, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46003,1689531360224 2023-07-16 18:16:00,404 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37401-0x1016f591f660002, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43051,1689531358519 2023-07-16 18:16:00,404 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43051-0x1016f591f660001, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46003,1689531360224 2023-07-16 18:16:00,405 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37401-0x1016f591f660002, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46003,1689531360224 2023-07-16 18:16:00,405 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:35551-0x1016f591f660003, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37401,1689531358669 2023-07-16 18:16:00,405 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:43051-0x1016f591f660001, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37401,1689531358669 2023-07-16 18:16:00,405 DEBUG 
[zk-event-processor-pool-0] zookeeper.ZKUtil(162): regionserver:37401-0x1016f591f660002, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37401,1689531358669 2023-07-16 18:16:00,405 DEBUG [RS:3;jenkins-hbase4:46003] zookeeper.ZKUtil(162): regionserver:46003-0x1016f591f66000b, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,35551,1689531358817 2023-07-16 18:16:00,406 DEBUG [RS:3;jenkins-hbase4:46003] zookeeper.ZKUtil(162): regionserver:46003-0x1016f591f66000b, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,43051,1689531358519 2023-07-16 18:16:00,406 DEBUG [RS:3;jenkins-hbase4:46003] zookeeper.ZKUtil(162): regionserver:46003-0x1016f591f66000b, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,46003,1689531360224 2023-07-16 18:16:00,406 DEBUG [RS:3;jenkins-hbase4:46003] zookeeper.ZKUtil(162): regionserver:46003-0x1016f591f66000b, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on existing znode=/hbase/rs/jenkins-hbase4.apache.org,37401,1689531358669 2023-07-16 18:16:00,407 DEBUG [RS:3;jenkins-hbase4:46003] regionserver.Replication(139): Replication stats-in-log period=300 seconds 2023-07-16 18:16:00,407 INFO [RS:3;jenkins-hbase4:46003] regionserver.MetricsRegionServerWrapperImpl(154): Computing regionserver metrics every 5000 milliseconds 2023-07-16 18:16:00,408 INFO [RS:3;jenkins-hbase4:46003] regionserver.MemStoreFlusher(125): globalMemStoreLimit=782.4 M, globalMemStoreLimitLowMark=743.3 M, Offheap=false 2023-07-16 18:16:00,408 INFO [RS:3;jenkins-hbase4:46003] throttle.PressureAwareCompactionThroughputController(131): Compaction throughput configurations, higher bound: 100.00 MB/second, lower bound 50.00 MB/second, off peak: unlimited, tuning period: 60000 ms 2023-07-16 18:16:00,408 INFO [RS:3;jenkins-hbase4:46003] hbase.ChoreService(166): Chore ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS is enabled. 2023-07-16 18:16:00,408 INFO [RS:3;jenkins-hbase4:46003] regionserver.HRegionServer$CompactionChecker(1837): CompactionChecker runs every PT1S 2023-07-16 18:16:00,410 INFO [RS:3;jenkins-hbase4:46003] hbase.ChoreService(166): Chore ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS is enabled. 
2023-07-16 18:16:00,410 DEBUG [RS:3;jenkins-hbase4:46003] executor.ExecutorService(93): Starting executor service name=RS_OPEN_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:16:00,411 DEBUG [RS:3;jenkins-hbase4:46003] executor.ExecutorService(93): Starting executor service name=RS_OPEN_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:16:00,411 DEBUG [RS:3;jenkins-hbase4:46003] executor.ExecutorService(93): Starting executor service name=RS_OPEN_PRIORITY_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:16:00,411 DEBUG [RS:3;jenkins-hbase4:46003] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_REGION-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:16:00,411 DEBUG [RS:3;jenkins-hbase4:46003] executor.ExecutorService(93): Starting executor service name=RS_CLOSE_META-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:16:00,411 DEBUG [RS:3;jenkins-hbase4:46003] executor.ExecutorService(93): Starting executor service name=RS_LOG_REPLAY_OPS-regionserver/jenkins-hbase4:0, corePoolSize=2, maxPoolSize=2 2023-07-16 18:16:00,411 DEBUG [RS:3;jenkins-hbase4:46003] executor.ExecutorService(93): Starting executor service name=RS_COMPACTED_FILES_DISCHARGER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:16:00,411 DEBUG [RS:3;jenkins-hbase4:46003] executor.ExecutorService(93): Starting executor service name=RS_REGION_REPLICA_FLUSH_OPS-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:16:00,411 DEBUG [RS:3;jenkins-hbase4:46003] executor.ExecutorService(93): Starting executor service name=RS_REFRESH_PEER-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:16:00,411 DEBUG [RS:3;jenkins-hbase4:46003] executor.ExecutorService(93): Starting executor service name=RS_SWITCH_RPC_THROTTLE-regionserver/jenkins-hbase4:0, corePoolSize=1, maxPoolSize=1 2023-07-16 18:16:00,412 INFO [RS:3;jenkins-hbase4:46003] hbase.ChoreService(166): Chore ScheduledChore name=CompactionChecker, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 18:16:00,412 INFO [RS:3;jenkins-hbase4:46003] hbase.ChoreService(166): Chore ScheduledChore name=MemstoreFlusherChore, period=1000, unit=MILLISECONDS is enabled. 2023-07-16 18:16:00,412 INFO [RS:3;jenkins-hbase4:46003] hbase.ChoreService(166): Chore ScheduledChore name=nonceCleaner, period=360000, unit=MILLISECONDS is enabled. 2023-07-16 18:16:00,425 INFO [RS:3;jenkins-hbase4:46003] regionserver.HeapMemoryManager(209): Starting, tuneOn=false 2023-07-16 18:16:00,425 INFO [RS:3;jenkins-hbase4:46003] hbase.ChoreService(166): Chore ScheduledChore name=jenkins-hbase4.apache.org,46003,1689531360224-HeapMemoryTunerChore, period=60000, unit=MILLISECONDS is enabled. 
2023-07-16 18:16:00,436 INFO [RS:3;jenkins-hbase4:46003] regionserver.Replication(203): jenkins-hbase4.apache.org,46003,1689531360224 started 2023-07-16 18:16:00,436 INFO [RS:3;jenkins-hbase4:46003] regionserver.HRegionServer(1637): Serving as jenkins-hbase4.apache.org,46003,1689531360224, RpcServer on jenkins-hbase4.apache.org/172.31.14.131:46003, sessionid=0x1016f591f66000b 2023-07-16 18:16:00,436 DEBUG [RS:3;jenkins-hbase4:46003] procedure.RegionServerProcedureManagerHost(51): Procedure flush-table-proc starting 2023-07-16 18:16:00,436 DEBUG [RS:3;jenkins-hbase4:46003] flush.RegionServerFlushTableProcedureManager(106): Start region server flush procedure manager jenkins-hbase4.apache.org,46003,1689531360224 2023-07-16 18:16:00,436 DEBUG [RS:3;jenkins-hbase4:46003] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46003,1689531360224' 2023-07-16 18:16:00,436 DEBUG [RS:3;jenkins-hbase4:46003] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/flush-table-proc/abort' 2023-07-16 18:16:00,437 DEBUG [RS:3;jenkins-hbase4:46003] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/flush-table-proc/acquired' 2023-07-16 18:16:00,437 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 18:16:00,437 DEBUG [RS:3;jenkins-hbase4:46003] procedure.RegionServerProcedureManagerHost(53): Procedure flush-table-proc started 2023-07-16 18:16:00,437 DEBUG [RS:3;jenkins-hbase4:46003] procedure.RegionServerProcedureManagerHost(51): Procedure online-snapshot starting 2023-07-16 18:16:00,437 DEBUG [RS:3;jenkins-hbase4:46003] snapshot.RegionServerSnapshotManager(126): Start Snapshot Manager jenkins-hbase4.apache.org,46003,1689531360224 2023-07-16 18:16:00,437 DEBUG [RS:3;jenkins-hbase4:46003] procedure.ZKProcedureMemberRpcs(357): Starting procedure member 'jenkins-hbase4.apache.org,46003,1689531360224' 2023-07-16 18:16:00,437 DEBUG [RS:3;jenkins-hbase4:46003] procedure.ZKProcedureMemberRpcs(134): Checking for aborted procedures on node: '/hbase/online-snapshot/abort' 2023-07-16 18:16:00,438 DEBUG [RS:3;jenkins-hbase4:46003] procedure.ZKProcedureMemberRpcs(154): Looking for new procedures under znode:'/hbase/online-snapshot/acquired' 2023-07-16 18:16:00,438 DEBUG [RS:3;jenkins-hbase4:46003] procedure.RegionServerProcedureManagerHost(53): Procedure online-snapshot started 2023-07-16 18:16:00,438 INFO [RS:3;jenkins-hbase4:46003] quotas.RegionServerRpcQuotaManager(63): Quota support disabled 2023-07-16 18:16:00,438 INFO [RS:3;jenkins-hbase4:46003] quotas.RegionServerSpaceQuotaManager(80): Quota support disabled, not starting space quota manager. 
2023-07-16 18:16:00,439 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:16:00,440 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:16:00,441 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:16:00,442 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:16:00,444 DEBUG [hconnection-0x37ae9b1f-metaLookup-shared--pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 18:16:00,445 INFO [RS-EventLoopGroup-14-2] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:35618, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 18:16:00,451 DEBUG [hconnection-0x37ae9b1f-shared-pool-0] ipc.RpcConnection(124): Using SIMPLE authentication for service=ClientService, sasl=false 2023-07-16 18:16:00,452 INFO [RS-EventLoopGroup-15-1] ipc.ServerRpcConnection(540): Connection from 172.31.14.131:55120, version=2.4.18-SNAPSHOT, sasl=false, ugi=jenkins (auth:SIMPLE), service=ClientService 2023-07-16 18:16:00,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:00,454 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:00,456 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32899] to rsgroup master 2023-07-16 18:16:00,456 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:16:00,457 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] ipc.CallRunner(144): callId: 20 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:52678 deadline: 1689532560456, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. 2023-07-16 18:16:00,457 WARN [Listener at localhost/42859] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 18:16:00,458 INFO [Listener at localhost/42859] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:16:00,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:00,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:00,459 INFO [Listener at localhost/42859] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35551, jenkins-hbase4.apache.org:37401, jenkins-hbase4.apache.org:43051, jenkins-hbase4.apache.org:46003], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 18:16:00,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:16:00,459 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:16:00,510 INFO [Listener at localhost/42859] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testRSGroupListDoesNotContainFailedTableCreation Thread=560 (was 514) Potentially hanging thread: RS-EventLoopGroup-15-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-1 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1474581235_17 at /127.0.0.1:56362 [Receiving block BP-861292803-172.31.14.131-1689531357576:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-32 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=35551 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 1278153789@qtp-926041004-1 - Acceptor0 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:44103 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=35551 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=43051 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor@5e3362a3 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.HeartbeatManager$Monitor.run(HeartbeatManager.java:451) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-66e1c64c-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1302818941-2245 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-14 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 35169 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp2078094533-2307 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-861292803-172.31.14.131-1689531357576 heartbeating to localhost/127.0.0.1:39679 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x2a21632f-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42859-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-16 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1130563439_17 at /127.0.0.1:46152 [Receiving block BP-861292803-172.31.14.131-1689531357576:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=37401 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-34 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=35551 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=46003 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54881@0x63fcf2f8 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/147357128.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/MasterData-prefix:jenkins-hbase4.apache.org,32899,1689531358331 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-861292803-172.31.14.131-1689531357576:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3fe0ca69-shared-pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1056197214-2312 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/472517021.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server idle connection scanner for port 39679 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Listener at localhost/42859-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58951@0x4162bfb9-SendThread(127.0.0.1:58951) java.lang.Thread.sleep(Native Method) org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=46003 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.4@localhost:40765 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=35551 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) 
org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x3fe0ca69-shared-pool-4 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1302818941-2241-acceptor-0@1a39c370-ServerConnector@5f6ec059{HTTP/1.1, (http/1.1)}{0.0.0.0:39345} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42859-SendThread(127.0.0.1:54881) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1029098583_17 at /127.0.0.1:56410 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1302818941-2242 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=43051 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: jenkins-hbase4:37401Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=46003 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-986144745_17 at /127.0.0.1:46104 [Waiting for operation #2] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42859-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: IPC Client (1806136717) connection to localhost/127.0.0.1:39679 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 0 on default port 42859 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=37401 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=32899 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-8-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2097980102-2215 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689531359162 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:549) org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$2.run(HFileCleaner.java:251) Potentially hanging thread: IPC Server handler 4 on default port 39679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ForkJoinPool-2-worker-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: RS-EventLoopGroup-14-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 33191 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS:3;jenkins-hbase4:46003 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 1 on default port 42859 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-8-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2097980102-2212 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42859 java.lang.Thread.dumpThreads(Native Method) java.lang.Thread.getAllStackTraces(Thread.java:1615) org.apache.hadoop.hbase.ResourceCheckerJUnitListener$ThreadResourceAnalyzer.getVal(ResourceCheckerJUnitListener.java:49) org.apache.hadoop.hbase.ResourceChecker.fill(ResourceChecker.java:110) org.apache.hadoop.hbase.ResourceChecker.fillEndings(ResourceChecker.java:104) org.apache.hadoop.hbase.ResourceChecker.end(ResourceChecker.java:206) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.end(ResourceCheckerJUnitListener.java:165) org.apache.hadoop.hbase.ResourceCheckerJUnitListener.testFinished(ResourceCheckerJUnitListener.java:185) org.junit.runner.notification.SynchronizedRunListener.testFinished(SynchronizedRunListener.java:87) org.junit.runner.notification.RunNotifier$9.notifyListener(RunNotifier.java:225) org.junit.runner.notification.RunNotifier$SafeNotifier.run(RunNotifier.java:72) org.junit.runner.notification.RunNotifier.fireTestFinished(RunNotifier.java:222) org.junit.internal.runners.model.EachTestNotifier.fireTestFinished(EachTestNotifier.java:38) org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:372) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) java.util.concurrent.FutureTask.run(FutureTask.java:266) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.8@localhost:39679 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 42859 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058-prefix:jenkins-hbase4.apache.org,35551,1689531358817 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=37401 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2078094533-2303 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1806136717) connection to localhost/127.0.0.1:39679 from jenkins.hfs.7 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@2a9298eb[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2078094533-2306 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1806136717) connection to localhost/127.0.0.1:39679 from jenkins.hfs.10 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 3 on default port 35169 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-14-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-16-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42859-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/cluster_3cda5aa3-2ebb-c38f-dd35-e868f192707f/dfs/data/data4/current/BP-861292803-172.31.14.131-1689531357576 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2078094533-2305 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-15 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=37401 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54881@0x632a472d sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/147357128.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 
jenkins-hbase4:46003Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42859-SendThread(127.0.0.1:54881) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RS-EventLoopGroup-16-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/cluster_3cda5aa3-2ebb-c38f-dd35-e868f192707f/dfs/data/data6) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=35551 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: 
refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/cluster_3cda5aa3-2ebb-c38f-dd35-e868f192707f/dfs/data/data2/current/BP-861292803-172.31.14.131-1689531357576 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-562-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=32899 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1309410303-2272 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/cluster_3cda5aa3-2ebb-c38f-dd35-e868f192707f/dfs/data/data2) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: ProcessThread(sid:0 cport:54881): sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:134) Potentially hanging thread: qtp1309410303-2273 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-561-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=46003 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server idle connection scanner for port 33191 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: RS-EventLoopGroup-14-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-861292803-172.31.14.131-1689531357576:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/33941-SendThread(127.0.0.1:58951) java.lang.Thread.sleep(Native Method) 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1072) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1139) Potentially hanging thread: 196436652@qtp-926945859-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37071 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54881@0x7c93a891-SendThread(127.0.0.1:54881) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: pool-552-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2078094533-2301-acceptor-0@1f9c90f6-ServerConnector@e9426d4{HTTP/1.1, (http/1.1)}{0.0.0.0:35553} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2097980102-2213 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1806136717) connection to localhost/127.0.0.1:40765 from jenkins.hfs.4 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp2097980102-2216 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2097980102-2214 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-861292803-172.31.14.131-1689531357576:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2078094533-2304 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins@localhost:39679 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.5@localhost:40765 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) 
org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42859.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=32899 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54881@0x723c7667 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/147357128.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=46003 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/42859-SendThread(127.0.0.1:54881) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: 
RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=46003 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/42859-SendThread(127.0.0.1:54881) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: ForkJoinPool-2-worker-7 sun.misc.Unsafe.park(Native Method) java.util.concurrent.ForkJoinPool.awaitWork(ForkJoinPool.java:1824) java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1693) java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1130563439_17 at /127.0.0.1:56378 [Receiving block BP-861292803-172.31.14.131-1689531357576:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: CacheReplicationMonitor(1770960085) sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:181) Potentially hanging thread: qtp1056197214-2311 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/472517021.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-13-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=32899 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) 
java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS:0;jenkins-hbase4:43051 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=0,queue=0,port=43051 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1309410303-2274 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-75842d05-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging 
thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=46003 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: PacketResponder: BP-861292803-172.31.14.131-1689531357576:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 980481013@qtp-2053191606-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: IPC Client (1806136717) connection to localhost/127.0.0.1:40765 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: qtp1309410303-2276 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=37401 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1309410303-2275 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1806136717) connection to localhost/127.0.0.1:40765 from jenkins.hfs.5 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: PacketResponder: BP-861292803-172.31.14.131-1689531357576:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 42859 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=46003 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Timer-27 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1474581235_17 at /127.0.0.1:60184 [Waiting for operation #4] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read(BufferedInputStream.java:265) java.io.DataInputStream.readShort(DataInputStream.java:312) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:67) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:269) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-24 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 4 on default port 35169 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: jenkins-hbase4:32899 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.master.assignment.AssignmentManager.waitOnAssignQueue(AssignmentManager.java:2102) org.apache.hadoop.hbase.master.assignment.AssignmentManager.processAssignQueue(AssignmentManager.java:2124) org.apache.hadoop.hbase.master.assignment.AssignmentManager.access$600(AssignmentManager.java:104) org.apache.hadoop.hbase.master.assignment.AssignmentManager$1.run(AssignmentManager.java:2064) Potentially hanging thread: IPC Server handler 2 on default port 39679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1130563439_17 at /127.0.0.1:60250 [Receiving block BP-861292803-172.31.14.131-1689531357576:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1806136717) connection to localhost/127.0.0.1:39679 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: Timer-31 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: IPC Server handler 4 on default port 33191 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-9-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1806136717) connection to localhost/127.0.0.1:40765 from jenkins.hfs.6 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=37401 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: hconnection-0x3fe0ca69-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:37401 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54881@0x5da743fd-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: LeaseRenewer:jenkins@localhost:40765 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-8-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,32899,1689531358331 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/cluster_3cda5aa3-2ebb-c38f-dd35-e868f192707f/dfs/data/data1) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-986144745_17 at /127.0.0.1:60222 [Receiving block BP-861292803-172.31.14.131-1689531357576:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-566-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: nioEventLoopGroup-18-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54881@0x632a472d-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/cluster_3cda5aa3-2ebb-c38f-dd35-e868f192707f/dfs/data/data1/current/BP-861292803-172.31.14.131-1689531357576 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1806136717) connection to localhost/127.0.0.1:39679 from jenkins.hfs.8 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@36612675 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-28 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@28645427 java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1130563439_17 at /127.0.0.1:56350 [Receiving block BP-861292803-172.31.14.131-1689531357576:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42859.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: hconnection-0x37ae9b1f-metaLookup-shared--pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-11 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-986144745_17 at /127.0.0.1:56316 [Receiving block BP-861292803-172.31.14.131-1689531357576:blk_1073741829_1005] 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 445605758@qtp-926041004-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@a284561[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-13 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:883) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) 
org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058-prefix:jenkins-hbase4.apache.org,43051,1689531358519 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58951@0x4162bfb9 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/147357128.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp380244390-2579 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/472517021.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1209443861@qtp-73238421-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: Listener at localhost/42859.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) 
org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/cluster_3cda5aa3-2ebb-c38f-dd35-e868f192707f/dfs/data/data3/current/BP-861292803-172.31.14.131-1689531357576 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: jenkins-hbase4:43051Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058-prefix:jenkins-hbase4.apache.org,37401,1689531358669.meta sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer for 'DataNode' metrics system java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:58951@0x4162bfb9-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Timer-33 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-30 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: qtp1302818941-2244 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@237dc36a java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@2b9b49ae sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=32899 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1474581235_17 at /127.0.0.1:46178 [Receiving block BP-861292803-172.31.14.131-1689531357576:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 39679 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=35551 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor@15b382eb java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.blockmanagement.PendingReplicationBlocks$PendingReplicationMonitor.run(PendingReplicationBlocks.java:244) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42859.LruBlockCache.EvictionThread java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.io.hfile.LruBlockCache$EvictionThread.run(LruBlockCache.java:902) Potentially hanging thread: pool-548-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-15-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1029098583_17 at /127.0.0.1:60232 [Receiving block BP-861292803-172.31.14.131-1689531357576:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native 
Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3fe0ca69-metaLookup-shared--pool-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: NIOServerCxnFactory.AcceptThread:localhost/127.0.0.1:54881 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.select(NIOServerCnxnFactory.java:229) org.apache.zookeeper.server.NIOServerCnxnFactory$AcceptThread.run(NIOServerCnxnFactory.java:205) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54881@0x5da743fd sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/147357128.run(Unknown Source) 
java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp380244390-2586 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-861292803-172.31.14.131-1689531357576:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-861292803-172.31.14.131-1689531357576:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54881@0x399d1e8b-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: jenkins-hbase4:35551Replication Statistics #0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=32899 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: AsyncFSWAL-0-hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058-prefix:jenkins-hbase4.apache.org,37401,1689531358669 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=2,queue=1,port=43051 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/42859-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp380244390-2580-acceptor-0@687373a9-ServerConnector@c24108f{HTTP/1.1, (http/1.1)}{0.0.0.0:33565} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp380244390-2583 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3fe0ca69-shared-pool-3 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:35551-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 33191 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54881@0x723c7667-SendThread(127.0.0.1:54881) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=46003 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54881@0x63fcf2f8-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: qtp1302818941-2243 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1056197214-2317 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@1bbd09cb sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-547-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 2 on default port 33191 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: IPC Client (1806136717) connection to localhost/127.0.0.1:39679 from jenkins.hfs.9 java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54881@0x7c93a891-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: Timer-29 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging 
thread: qtp380244390-2585 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2097980102-2210-acceptor-0@150eba09-ServerConnector@421485aa{HTTP/1.1, (http/1.1)}{0.0.0.0:39933} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-9-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor@7e374ea1 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeResourceMonitor.run(FSNamesystem.java:3842) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-2a64ec80-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-35 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) 
java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-861292803-172.31.14.131-1689531357576:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=37401 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=43051 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/33941-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: nioEventLoopGroup-16-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Session-HouseKeeper-40742411-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.read.handler=1,queue=1,port=35551 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Listener at localhost/42859-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54881@0x63fcf2f8-SendThread(127.0.0.1:54881) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp380244390-2581 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3fe0ca69-shared-pool-5 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1309410303-2277 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=35551 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp380244390-2582 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3fe0ca69-shared-pool-2 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 33191 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-13-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) 
org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1130563439_17 at /127.0.0.1:46184 [Receiving block BP-861292803-172.31.14.131-1689531357576:blk_1073741835_1011] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=37401 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: pool-557-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=43051 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1474581235_17 at /127.0.0.1:60248 [Receiving block BP-861292803-172.31.14.131-1689531357576:blk_1073741834_1010] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-861292803-172.31.14.131-1689531357576:blk_1073741829_1005, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: java.util.concurrent.ThreadPoolExecutor$Worker@4a683e62[State = -1, empty queue] sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 0 on default port 35169 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp1056197214-2314 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/472517021.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-861292803-172.31.14.131-1689531357576:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RPCClient-NioEventLoopGroup-6-12 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:879) org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:526) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:3;jenkins-hbase4:46003-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=35551 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=43051 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1302818941-2247 
sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Server handler 3 on default port 39679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-9-1 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2078094533-2300 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/472517021.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-11-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: M:0;jenkins-hbase4:32899 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.master.HMaster.waitForMasterActive(HMaster.java:634) org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:957) org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:904) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1006) org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:541) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: BP-861292803-172.31.14.131-1689531357576 heartbeating to localhost/127.0.0.1:39679 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-543-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-10-3 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) 
org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-541-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1130563439_17 at /127.0.0.1:60230 [Receiving block BP-861292803-172.31.14.131-1689531357576:blk_1073741832_1008] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689531359147 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) 
org.apache.hadoop.hbase.master.cleaner.HFileCleaner.consumerLoop(HFileCleaner.java:267) org.apache.hadoop.hbase.master.cleaner.HFileCleaner$1.run(HFileCleaner.java:236) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=32899 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: nioEventLoopGroup-14-1 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:68) io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:803) io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:457) io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54881@0x7c93a891 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/147357128.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54881@0x6e985cec sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/147357128.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 1899323936@qtp-2053191606-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:34935 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) 
org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: BP-861292803-172.31.14.131-1689531357576 heartbeating to localhost/127.0.0.1:39679 java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.waitTillNextIBR(IncrementalBlockReportManager.java:158) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:715) org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:846) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=37401 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/cluster_3cda5aa3-2ebb-c38f-dd35-e868f192707f/dfs/data/data5/current/BP-861292803-172.31.14.131-1689531357576 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:0;jenkins-hbase4:43051-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 132733128@qtp-926945859-0 java.lang.Object.wait(Native Method) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1029098583_17 at /127.0.0.1:56352 [Receiving block BP-861292803-172.31.14.131-1689531357576:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54881@0x6e985cec-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=43051 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=1,queue=0,port=46003 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: Session-HouseKeeper-1cfaac5f-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54881@0x6e985cec-SendThread(127.0.0.1:54881) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@644ab454 sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:113) org.apache.hadoop.hdfs.net.TcpPeerServer.accept(TcpPeerServer.java:85) org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.6@localhost:40765 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:2;jenkins-hbase4:35551 java.lang.Object.wait(Native Method) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:81) org.apache.hadoop.hbase.util.Sleeper.sleep(Sleeper.java:64) org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1092) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:175) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:123) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:159) java.security.AccessController.doPrivileged(Native Method) javax.security.auth.Subject.doAs(Subject.java:360) org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1873) org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:319) org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:156) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1056197214-2316 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS:1;jenkins-hbase4:37401-longCompactions-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) org.apache.hadoop.hbase.util.StealJobQueue.take(StealJobQueue.java:101) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RS-EventLoopGroup-12-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=0,queue=0,port=37401 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/cluster_3cda5aa3-2ebb-c38f-dd35-e868f192707f/dfs/data/data5) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@cce92eb java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:528) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-861292803-172.31.14.131-1689531357576:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54881@0x399d1e8b sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.DelayQueue.poll(DelayQueue.java:259) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:324) org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$$Lambda$93/147357128.run(Unknown Source) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/cluster_3cda5aa3-2ebb-c38f-dd35-e868f192707f/dfs/data/data6/current/BP-861292803-172.31.14.131-1689531357576 java.lang.Thread.sleep(Native Method) org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:179) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Timer-26 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: Timer-25 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@9c5b3bd java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller.run(FSNamesystem.java:3884) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=32899 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: IPC Server handler 4 on default port 42859 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@5c289ca7 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber.run(FSNamesystem.java:3975) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_1029098583_17 at /127.0.0.1:46164 [Receiving block BP-861292803-172.31.14.131-1689531357576:blk_1073741833_1009] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42859-SendThread(127.0.0.1:54881) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: DataXceiver for client DFSClient_NONMAPREDUCE_-986144745_17 at /127.0.0.1:46136 [Receiving block BP-861292803-172.31.14.131-1689531357576:blk_1073741829_1005] sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335) org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161) org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131) java.io.BufferedInputStream.fill(BufferedInputStream.java:246) java.io.BufferedInputStream.read1(BufferedInputStream.java:286) java.io.BufferedInputStream.read(BufferedInputStream.java:345) java.io.DataInputStream.read(DataInputStream.java:149) org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:209) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:211) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134) org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:528) org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:968) org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:877) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:166) org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:103) org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:290) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: LeaseRenewer:jenkins.hfs.7@localhost:39679 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2078094533-2302 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1056197214-2318 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/cluster_3cda5aa3-2ebb-c38f-dd35-e868f192707f/dfs/data/data3) java.lang.Object.wait(Native Method) 
org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Server handler 2 on default port 35169 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: RS-EventLoopGroup-10-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:209) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:202) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWaitNoTimerChange(EpollEventLoop.java:306) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:363) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: Listener at localhost/42859-SendThread(127.0.0.1:54881) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1056197214-2313 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/472517021.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.metaPriority.FPBQ.Fifo.handler=0,queue=0,port=32899 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: RS-EventLoopGroup-15-2 org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait0(Native Method) org.apache.hbase.thirdparty.io.netty.channel.epoll.Native.epollWait(Native.java:182) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.epollWait(EpollEventLoop.java:302) org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:366) org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54881@0x723c7667-EventThread sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:506) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54881@0x632a472d-SendThread(127.0.0.1:54881) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: IPC Server idle connection scanner for port 42859 java.lang.Object.wait(Native Method) java.util.TimerThread.mainLoop(Timer.java:552) java.util.TimerThread.run(Timer.java:505) Potentially hanging thread: PacketResponder: BP-861292803-172.31.14.131-1689531357576:blk_1073741833_1009, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.priority.RWQ.Fifo.write.handler=0,queue=0,port=43051 sun.misc.Unsafe.park(Native Method) 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.getCallRunner(RpcExecutor.java:340) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp1302818941-2240 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/472517021.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread-jenkins-hbase4.apache.org,44131,1689531352332 java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hbase.rsgroup.RSGroupInfoManagerImpl$ServerEventsListenerThread.run(RSGroupInfoManagerImpl.java:797) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54881@0x5da743fd-SendThread(127.0.0.1:54881) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: qtp1302818941-2246 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=2,queue=0,port=35551 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: qtp2097980102-2211 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1056197214-2315-acceptor-0@674e527b-ServerConnector@3033a3f0{HTTP/1.1, (http/1.1)}{0.0.0.0:36519} sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: ReadOnlyZKClient-127.0.0.1:54881@0x399d1e8b-SendThread(127.0.0.1:54881) sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:345) org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1223) Potentially hanging thread: VolumeScannerThread(/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/cluster_3cda5aa3-2ebb-c38f-dd35-e868f192707f/dfs/data/data4) java.lang.Object.wait(Native Method) org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:627) Potentially hanging thread: IPC Server handler 1 on default port 39679 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: qtp380244390-2584 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) org.apache.hbase.thirdparty.org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:382) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.idleJobPoll(QueuedThreadPool.java:974) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1018) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: RpcServer.replication.FPBQ.Fifo.handler=1,queue=0,port=43051 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997) java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304) java.util.concurrent.Semaphore.acquire(Semaphore.java:312) org.apache.hadoop.hbase.ipc.FastPathBalancedQueueRpcExecutor$FastPathHandler.getCallRunner(FastPathBalancedQueueRpcExecutor.java:105) org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) Potentially hanging thread: LeaseRenewer:jenkins.hfs.9@localhost:39679 java.lang.Thread.sleep(Native Method) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.run(LeaseRenewer.java:411) org.apache.hadoop.hdfs.client.impl.LeaseRenewer.access$600(LeaseRenewer.java:76) org.apache.hadoop.hdfs.client.impl.LeaseRenewer$1.run(LeaseRenewer.java:307) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x37ae9b1f-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: PacketResponder: BP-861292803-172.31.14.131-1689531357576:blk_1073741834_1010, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-546-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: IPC Client (1806136717) connection to localhost/127.0.0.1:40765 from jenkins java.lang.Object.wait(Native Method) org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:1035) org.apache.hadoop.ipc.Client$Connection.run(Client.java:1079) Potentially hanging thread: IPC Server handler 1 on default port 35169 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:294) org.apache.hadoop.ipc.Server$Handler.run(Server.java:2799) Potentially hanging thread: PacketResponder: BP-861292803-172.31.14.131-1689531357576:blk_1073741835_1011, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1309410303-2270 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/472517021.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp1309410303-2271-acceptor-0@3fc28387-ServerConnector@701fd4b6{HTTP/1.1, (http/1.1)}{0.0.0.0:40401} 
sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:421) sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:249) org.apache.hbase.thirdparty.org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:388) org.apache.hbase.thirdparty.org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:704) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: pool-553-thread-1 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: 585111834@qtp-73238421-1 - Acceptor0 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:35165 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:498) org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192) org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124) org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708) org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582) Potentially hanging thread: PacketResponder: BP-861292803-172.31.14.131-1689531357576:blk_1073741832_1008, type=LAST_IN_PIPELINE java.lang.Object.wait(Native Method) java.lang.Object.wait(Object.java:502) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.waitForAckHead(BlockReceiver.java:1327) org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1399) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: qtp2097980102-2209 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.nioSelect(ManagedSelector.java:183) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector.select(ManagedSelector.java:190) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:606) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:543) 
org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produceTask(EatWhatYouKill.java:362) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:186) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:137) org.apache.hbase.thirdparty.org.eclipse.jetty.io.ManagedSelector$$Lambda$75/472517021.run(Unknown Source) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) org.apache.hbase.thirdparty.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: org.apache.hadoop.util.JvmPauseMonitor$Monitor@30d2c73a java.lang.Thread.sleep(Native Method) org.apache.hadoop.util.JvmPauseMonitor$Monitor.run(JvmPauseMonitor.java:192) java.lang.Thread.run(Thread.java:750) Potentially hanging thread: hconnection-0x3fe0ca69-shared-pool-0 sun.misc.Unsafe.park(Native Method) java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) java.lang.Thread.run(Thread.java:750) - Thread LEAK? -, OpenFileDescriptor=847 (was 806) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=362 (was 409), ProcessCount=171 (was 171), AvailableMemoryMB=4916 (was 5053) 2023-07-16 18:16:00,513 WARN [Listener at localhost/42859] hbase.ResourceChecker(130): Thread=560 is superior to 500 2023-07-16 18:16:00,535 INFO [Listener at localhost/42859] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=559, OpenFileDescriptor=846, MaxFileDescriptor=60000, SystemLoadAverage=362, ProcessCount=171, AvailableMemoryMB=4914 2023-07-16 18:16:00,535 WARN [Listener at localhost/42859] hbase.ResourceChecker(130): Thread=559 is superior to 500 2023-07-16 18:16:00,535 INFO [Listener at localhost/42859] rsgroup.TestRSGroupsBase(132): testNotMoveTableToNullRSGroupWhenCreatingExistingTable 2023-07-16 18:16:00,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:00,540 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:00,540 INFO [RS:3;jenkins-hbase4:46003] wal.AbstractFSWAL(489): WAL configuration: blocksize=256 MB, rollsize=128 MB, prefix=jenkins-hbase4.apache.org%2C46003%2C1689531360224, suffix=, logDir=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/WALs/jenkins-hbase4.apache.org,46003,1689531360224, archiveDir=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/oldWALs, maxLogs=32 2023-07-16 18:16:00,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:16:00,541 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
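The RSGroupAdminService requests recorded around this point (ListRSGroupInfos, MoveTables, MoveServers, RemoveRSGroup, AddRSGroup) are driven from the test's setup/teardown through RSGroupAdminClient, as the client-side stack trace later in this log shows. Below is a minimal client-side sketch of that sequence, assuming the 2.x hbase-rsgroup client API; the constructor and method signatures are approximations from that module, and the class name RSGroupAdminSketch is invented for illustration only.

import java.io.IOException;
import java.util.Collections;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

public class RSGroupAdminSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf)) {
      RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);

      // "list rsgroup" -> RSGroupAdminService.ListRSGroupInfos
      rsGroupAdmin.listRSGroups().forEach(g -> System.out.println(g.getName()));

      // "move tables [] to rsgroup default" / "move servers [] to rsgroup default";
      // an empty set is ignored server-side ("moveTables() passed an empty set. Ignoring.")
      rsGroupAdmin.moveTables(Collections.emptySet(), "default");
      rsGroupAdmin.moveServers(Collections.emptySet(), "default");

      // "remove rsgroup master" during teardown, then "add rsgroup master" during setup
      rsGroupAdmin.removeRSGroup("master");
      rsGroupAdmin.addRSGroup("master");

      // Moving the master's own address into the group is rejected with
      // ConstraintException, as the exception later in this log shows.
      try {
        rsGroupAdmin.moveServers(
            Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 32899)),
            "master");
      } catch (IOException expected) {
        System.out.println("move rejected: " + expected.getMessage());
      }
    }
  }
}

In the test itself these calls go through VerifyingRSGroupAdminClient, which wraps RSGroupAdminClient and cross-checks the group state stored in ZooKeeper, which is why the expected ConstraintException is logged as "Got this on setup, FYI" rather than failing the test.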
2023-07-16 18:16:00,541 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:16:00,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 18:16:00,542 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:16:00,543 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 18:16:00,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:16:00,548 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 18:16:00,551 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:16:00,555 INFO [Listener at localhost/42859] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 18:16:00,556 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 18:16:00,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:16:00,559 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:16:00,566 DEBUG [RS-EventLoopGroup-16-1] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:38727,DS-e508de1e-751d-4f9c-86b1-7c85540a99e8,DISK] 2023-07-16 18:16:00,566 DEBUG [RS-EventLoopGroup-16-2] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:32865,DS-63b2dce7-ee25-4e15-8bc1-af885cb05c90,DISK] 2023-07-16 18:16:00,567 DEBUG [RS-EventLoopGroup-16-3] asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(809): SASL client skipping handshake in unsecured configuration for addr = 127.0.0.1/127.0.0.1, datanodeId = DatanodeInfoWithStorage[127.0.0.1:41671,DS-772c744a-be2a-46db-aa0e-2db3a2bde8ed,DISK] 2023-07-16 18:16:00,567 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:16:00,569 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:16:00,575 INFO [RS:3;jenkins-hbase4:46003] wal.AbstractFSWAL(806): New WAL 
/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/WALs/jenkins-hbase4.apache.org,46003,1689531360224/jenkins-hbase4.apache.org%2C46003%2C1689531360224.1689531360541 2023-07-16 18:16:00,575 DEBUG [RS:3;jenkins-hbase4:46003] wal.AbstractFSWAL(887): Create new AsyncFSWAL writer with pipeline: [DatanodeInfoWithStorage[127.0.0.1:38727,DS-e508de1e-751d-4f9c-86b1-7c85540a99e8,DISK], DatanodeInfoWithStorage[127.0.0.1:41671,DS-772c744a-be2a-46db-aa0e-2db3a2bde8ed,DISK], DatanodeInfoWithStorage[127.0.0.1:32865,DS-63b2dce7-ee25-4e15-8bc1-af885cb05c90,DISK]] 2023-07-16 18:16:00,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:00,577 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:00,579 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32899] to rsgroup master 2023-07-16 18:16:00,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:16:00,579 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] ipc.CallRunner(144): callId: 48 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:52678 deadline: 1689532560579, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. 2023-07-16 18:16:00,579 WARN [Listener at localhost/42859] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 18:16:00,581 INFO [Listener at localhost/42859] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:16:00,581 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:00,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:00,582 INFO [Listener at localhost/42859] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35551, jenkins-hbase4.apache.org:37401, jenkins-hbase4.apache.org:43051, jenkins-hbase4.apache.org:46003], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 18:16:00,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:16:00,582 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:16:00,584 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 18:16:00,585 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] procedure2.ProcedureExecutor(1029): Stored pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-16 18:16:00,587 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 18:16:00,587 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(700): Client=jenkins//172.31.14.131 procedure request for creating table: namespace: "default" qualifier: "t1" procId is: 12 2023-07-16 18:16:00,587 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-16 18:16:00,588 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:16:00,589 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:16:00,589 DEBUG [PEWorker-3] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:16:00,591 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_WRITE_FS_LAYOUT, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_WRITE_FS_LAYOUT 2023-07-16 18:16:00,592 DEBUG [HFileArchiver-7] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/.tmp/data/default/t1/5e8419c1d565c20948145bc127a06bf3 2023-07-16 
18:16:00,593 DEBUG [HFileArchiver-7] backup.HFileArchiver(153): Directory hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/.tmp/data/default/t1/5e8419c1d565c20948145bc127a06bf3 empty. 2023-07-16 18:16:00,593 DEBUG [HFileArchiver-7] backup.HFileArchiver(599): Failed to delete directory hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/.tmp/data/default/t1/5e8419c1d565c20948145bc127a06bf3 2023-07-16 18:16:00,593 DEBUG [PEWorker-3] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-16 18:16:00,609 DEBUG [PEWorker-3] util.FSTableDescriptors(570): Wrote into hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/.tmp/data/default/t1/.tabledesc/.tableinfo.0000000001 2023-07-16 18:16:00,611 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(7675): creating {ENCODED => 5e8419c1d565c20948145bc127a06bf3, NAME => 't1,,1689531360584.5e8419c1d565c20948145bc127a06bf3.', STARTKEY => '', ENDKEY => ''}, tableDescriptor='t1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}, regionDir=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/.tmp 2023-07-16 18:16:00,623 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(866): Instantiated t1,,1689531360584.5e8419c1d565c20948145bc127a06bf3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:16:00,623 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1604): Closing 5e8419c1d565c20948145bc127a06bf3, disabling compactions & flushes 2023-07-16 18:16:00,623 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1626): Closing region t1,,1689531360584.5e8419c1d565c20948145bc127a06bf3. 2023-07-16 18:16:00,623 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689531360584.5e8419c1d565c20948145bc127a06bf3. 2023-07-16 18:16:00,624 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689531360584.5e8419c1d565c20948145bc127a06bf3. after waiting 0 ms 2023-07-16 18:16:00,624 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689531360584.5e8419c1d565c20948145bc127a06bf3. 2023-07-16 18:16:00,624 INFO [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1838): Closed t1,,1689531360584.5e8419c1d565c20948145bc127a06bf3. 2023-07-16 18:16:00,624 DEBUG [RegionOpenAndInit-t1-pool-0] regionserver.HRegion(1558): Region close journal for 5e8419c1d565c20948145bc127a06bf3: 2023-07-16 18:16:00,626 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ADD_TO_META, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ADD_TO_META 2023-07-16 18:16:00,627 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":2,"row":"t1,,1689531360584.5e8419c1d565c20948145bc127a06bf3.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689531360627"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531360627"}]},"ts":"1689531360627"} 2023-07-16 18:16:00,629 INFO [PEWorker-3] hbase.MetaTableAccessor(1496): Added 1 regions to meta. 
2023-07-16 18:16:00,629 INFO [PEWorker-3] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_ASSIGN_REGIONS, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_ASSIGN_REGIONS 2023-07-16 18:16:00,630 DEBUG [PEWorker-3] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531360630"}]},"ts":"1689531360630"} 2023-07-16 18:16:00,631 INFO [PEWorker-3] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLING in hbase:meta 2023-07-16 18:16:00,635 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(334): Hosts are {jenkins-hbase4.apache.org=0} racks are {/default-rack=0} 2023-07-16 18:16:00,635 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 0 is on host 0 2023-07-16 18:16:00,635 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 1 is on host 0 2023-07-16 18:16:00,635 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 2 is on host 0 2023-07-16 18:16:00,635 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(362): server 3 is on host 0 2023-07-16 18:16:00,635 DEBUG [PEWorker-3] balancer.BaseLoadBalancer$Cluster(378): Number of tables=1, number of hosts=1, number of racks=1 2023-07-16 18:16:00,635 INFO [PEWorker-3] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=5e8419c1d565c20948145bc127a06bf3, ASSIGN}] 2023-07-16 18:16:00,636 INFO [PEWorker-5] procedure.MasterProcedureScheduler(727): Took xlock for pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE; TransitRegionStateProcedure table=t1, region=5e8419c1d565c20948145bc127a06bf3, ASSIGN 2023-07-16 18:16:00,637 INFO [PEWorker-5] assignment.TransitRegionStateProcedure(193): Starting pid=13, ppid=12, state=RUNNABLE:REGION_STATE_TRANSITION_GET_ASSIGN_CANDIDATE, locked=true; TransitRegionStateProcedure table=t1, region=5e8419c1d565c20948145bc127a06bf3, ASSIGN; state=OFFLINE, location=jenkins-hbase4.apache.org,37401,1689531358669; forceNewPlan=false, retain=false 2023-07-16 18:16:00,689 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-16 18:16:00,787 INFO [jenkins-hbase4:32899] balancer.BaseLoadBalancer(1545): Reassigned 1 regions. 1 retained the pre-restart assignment. 
2023-07-16 18:16:00,789 INFO [PEWorker-4] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=5e8419c1d565c20948145bc127a06bf3, regionState=OPENING, regionLocation=jenkins-hbase4.apache.org,37401,1689531358669 2023-07-16 18:16:00,789 DEBUG [PEWorker-4] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689531360584.5e8419c1d565c20948145bc127a06bf3.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689531360789"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531360789"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531360789"}]},"ts":"1689531360789"} 2023-07-16 18:16:00,790 INFO [PEWorker-4] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=14, ppid=13, state=RUNNABLE; OpenRegionProcedure 5e8419c1d565c20948145bc127a06bf3, server=jenkins-hbase4.apache.org,37401,1689531358669}] 2023-07-16 18:16:00,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-16 18:16:00,945 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(130): Open t1,,1689531360584.5e8419c1d565c20948145bc127a06bf3. 2023-07-16 18:16:00,945 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7854): Opening region: {ENCODED => 5e8419c1d565c20948145bc127a06bf3, NAME => 't1,,1689531360584.5e8419c1d565c20948145bc127a06bf3.', STARTKEY => '', ENDKEY => ''} 2023-07-16 18:16:00,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.MetricsRegionSourceImpl(79): Creating new MetricsRegionSourceImpl for table t1 5e8419c1d565c20948145bc127a06bf3 2023-07-16 18:16:00,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(866): Instantiated t1,,1689531360584.5e8419c1d565c20948145bc127a06bf3.; StoreHotnessProtector, parallelPutToStoreThreadLimit=10 ; minColumnNum=100 ; preparePutThreadLimit=20 ; hotProtect now enable 2023-07-16 18:16:00,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7894): checking encryption for 5e8419c1d565c20948145bc127a06bf3 2023-07-16 18:16:00,946 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(7897): checking classloading for 5e8419c1d565c20948145bc127a06bf3 2023-07-16 18:16:00,947 INFO [StoreOpener-5e8419c1d565c20948145bc127a06bf3-1] regionserver.HStore(381): Created cacheConfig: cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false, for column family cf1 of region 5e8419c1d565c20948145bc127a06bf3 2023-07-16 18:16:00,948 DEBUG [StoreOpener-5e8419c1d565c20948145bc127a06bf3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/default/t1/5e8419c1d565c20948145bc127a06bf3/cf1 2023-07-16 18:16:00,948 DEBUG [StoreOpener-5e8419c1d565c20948145bc127a06bf3-1] util.CommonFSUtils(522): Set storagePolicy=HOT for path=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/default/t1/5e8419c1d565c20948145bc127a06bf3/cf1 2023-07-16 18:16:00,949 INFO [StoreOpener-5e8419c1d565c20948145bc127a06bf3-1] compactions.CompactionConfiguration(173): size [minCompactSize:128 MB, maxCompactSize:8.00 EB, offPeakMaxCompactSize:8.00 EB); files [minFilesToCompact:3, maxFilesToCompact:10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 
604800000, major jitter 0.500000, min locality to compact 0.000000; tiered compaction: max_age 9223372036854775807, incoming window min 6, compaction policy for tiered window org.apache.hadoop.hbase.regionserver.compactions.ExploringCompactionPolicy, single output for minor true, compaction window factory org.apache.hadoop.hbase.regionserver.compactions.ExponentialCompactionWindowFactory, region 5e8419c1d565c20948145bc127a06bf3 columnFamilyName cf1 2023-07-16 18:16:00,949 INFO [StoreOpener-5e8419c1d565c20948145bc127a06bf3-1] regionserver.HStore(310): Store=5e8419c1d565c20948145bc127a06bf3/cf1, memstore type=DefaultMemStore, storagePolicy=HOT, verifyBulkLoads=false, parallelPutCountPrintThreshold=50, encoding=NONE, compression=NONE 2023-07-16 18:16:00,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/default/t1/5e8419c1d565c20948145bc127a06bf3 2023-07-16 18:16:00,950 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(5209): Found 0 recovered edits file(s) under hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/default/t1/5e8419c1d565c20948145bc127a06bf3 2023-07-16 18:16:00,952 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1055): writing seq id for 5e8419c1d565c20948145bc127a06bf3 2023-07-16 18:16:00,954 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/default/t1/5e8419c1d565c20948145bc127a06bf3/recovered.edits/1.seqid, newMaxSeqId=1, maxSeqId=-1 2023-07-16 18:16:00,954 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1072): Opened 5e8419c1d565c20948145bc127a06bf3; next sequenceid=2; SteppingSplitPolicysuper{IncreasingToUpperBoundRegionSplitPolicy{initialSize=268435456, ConstantSizeRegionSplitPolicy{desiredMaxFileSize=11012532160, jitterRate=0.02562198042869568}}}, FlushLargeStoresPolicy{flushSizeLowerBound=-1} 2023-07-16 18:16:00,955 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(965): Region open journal for 5e8419c1d565c20948145bc127a06bf3: 2023-07-16 18:16:00,955 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2336): Post open deploy tasks for t1,,1689531360584.5e8419c1d565c20948145bc127a06bf3., pid=14, masterSystemTime=1689531360942 2023-07-16 18:16:00,957 DEBUG [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionServer(2363): Finished post open deploy task for t1,,1689531360584.5e8419c1d565c20948145bc127a06bf3. 2023-07-16 18:16:00,957 INFO [RS_OPEN_REGION-regionserver/jenkins-hbase4:0-0] handler.AssignRegionHandler(158): Opened t1,,1689531360584.5e8419c1d565c20948145bc127a06bf3. 
2023-07-16 18:16:00,957 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=13 updating hbase:meta row=5e8419c1d565c20948145bc127a06bf3, regionState=OPEN, openSeqNum=2, regionLocation=jenkins-hbase4.apache.org,37401,1689531358669 2023-07-16 18:16:00,957 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":5,"row":"t1,,1689531360584.5e8419c1d565c20948145bc127a06bf3.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689531360957"},{"qualifier":"server","vlen":31,"tag":[],"timestamp":"1689531360957"},{"qualifier":"serverstartcode","vlen":8,"tag":[],"timestamp":"1689531360957"},{"qualifier":"seqnumDuringOpen","vlen":8,"tag":[],"timestamp":"1689531360957"}]},"ts":"1689531360957"} 2023-07-16 18:16:00,960 INFO [PEWorker-1] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=14, resume processing ppid=13 2023-07-16 18:16:00,960 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=14, ppid=13, state=SUCCESS; OpenRegionProcedure 5e8419c1d565c20948145bc127a06bf3, server=jenkins-hbase4.apache.org,37401,1689531358669 in 168 msec 2023-07-16 18:16:00,961 INFO [PEWorker-3] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=13, resume processing ppid=12 2023-07-16 18:16:00,961 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=13, ppid=12, state=SUCCESS; TransitRegionStateProcedure table=t1, region=5e8419c1d565c20948145bc127a06bf3, ASSIGN in 325 msec 2023-07-16 18:16:00,962 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_UPDATE_DESC_CACHE, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_UPDATE_DESC_CACHE 2023-07-16 18:16:00,962 DEBUG [PEWorker-5] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531360962"}]},"ts":"1689531360962"} 2023-07-16 18:16:00,963 INFO [PEWorker-5] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=ENABLED in hbase:meta 2023-07-16 18:16:00,965 INFO [PEWorker-5] procedure.CreateTableProcedure(80): pid=12, state=RUNNABLE:CREATE_TABLE_POST_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_POST_OPERATION 2023-07-16 18:16:00,966 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=12, state=SUCCESS; CreateTableProcedure table=t1 in 381 msec 2023-07-16 18:16:01,191 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(1230): Checking to see if procedure is done pid=12 2023-07-16 18:16:01,191 INFO [Listener at localhost/42859] client.HBaseAdmin$TableFuture(3541): Operation: CREATE, Table Name: default:t1, procId: 12 completed 2023-07-16 18:16:01,191 DEBUG [Listener at localhost/42859] hbase.HBaseTestingUtility(3430): Waiting until all regions of table t1 get assigned. Timeout = 60000ms 2023-07-16 18:16:01,191 INFO [Listener at localhost/42859] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:16:01,193 INFO [Listener at localhost/42859] hbase.HBaseTestingUtility(3484): All regions for table t1 assigned to meta. Checking AM states. 2023-07-16 18:16:01,194 INFO [Listener at localhost/42859] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:16:01,194 INFO [Listener at localhost/42859] hbase.HBaseTestingUtility(3504): All regions for table t1 assigned. 
2023-07-16 18:16:01,195 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.HMaster$4(2112): Client=jenkins//172.31.14.131 create 't1', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '1'}}, {NAME => 'cf1', BLOOMFILTER => 'NONE', IN_MEMORY => 'false', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'} 2023-07-16 18:16:01,196 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] procedure2.ProcedureExecutor(1029): Stored pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION; CreateTableProcedure table=t1 2023-07-16 18:16:01,198 INFO [PEWorker-4] procedure.CreateTableProcedure(80): pid=15, state=RUNNABLE:CREATE_TABLE_PRE_OPERATION, locked=true; CreateTableProcedure table=t1 execute state=CREATE_TABLE_PRE_OPERATION 2023-07-16 18:16:01,199 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.TableExistsException: t1 at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:243) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:85) at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:53) at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:188) at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:922) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1646) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1392) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$1100(ProcedureExecutor.java:73) at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$WorkerThread.run(ProcedureExecutor.java:1964) 2023-07-16 18:16:01,200 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] ipc.CallRunner(144): callId: 65 service: MasterService methodName: CreateTable size: 354 connection: 172.31.14.131:52678 deadline: 1689531421195, exception=org.apache.hadoop.hbase.TableExistsException: t1 2023-07-16 18:16:01,201 INFO [Listener at localhost/42859] hbase.Waiter(180): Waiting up to [5,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:16:01,203 INFO [PEWorker-4] procedure2.ProcedureExecutor(1528): Rolled back pid=15, state=ROLLEDBACK, exception=org.apache.hadoop.hbase.TableExistsException via master-create-table:org.apache.hadoop.hbase.TableExistsException: t1; CreateTableProcedure table=t1 exec-time=7 msec 2023-07-16 18:16:01,302 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:16:01,302 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:16:01,303 INFO [Listener at localhost/42859] client.HBaseAdmin$15(890): Started disable of t1 2023-07-16 18:16:01,303 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.HMaster$11(2418): Client=jenkins//172.31.14.131 disable t1 2023-07-16 18:16:01,304 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] 
procedure2.ProcedureExecutor(1029): Stored pid=16, state=RUNNABLE:DISABLE_TABLE_PREPARE; DisableTableProcedure table=t1 2023-07-16 18:16:01,306 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-16 18:16:01,307 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531361307"}]},"ts":"1689531361307"} 2023-07-16 18:16:01,308 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): Updated tableName=t1, state=DISABLING in hbase:meta 2023-07-16 18:16:01,309 INFO [PEWorker-2] procedure.DisableTableProcedure(293): Set t1 to state=DISABLING 2023-07-16 18:16:01,310 INFO [PEWorker-2] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=5e8419c1d565c20948145bc127a06bf3, UNASSIGN}] 2023-07-16 18:16:01,311 INFO [PEWorker-1] procedure.MasterProcedureScheduler(727): Took xlock for pid=17, ppid=16, state=RUNNABLE:REGION_STATE_TRANSITION_CLOSE; TransitRegionStateProcedure table=t1, region=5e8419c1d565c20948145bc127a06bf3, UNASSIGN 2023-07-16 18:16:01,311 INFO [PEWorker-1] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=5e8419c1d565c20948145bc127a06bf3, regionState=CLOSING, regionLocation=jenkins-hbase4.apache.org,37401,1689531358669 2023-07-16 18:16:01,311 DEBUG [PEWorker-1] assignment.RegionStateStore(405): Put {"totalColumns":3,"row":"t1,,1689531360584.5e8419c1d565c20948145bc127a06bf3.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689531361311"},{"qualifier":"sn","vlen":45,"tag":[],"timestamp":"1689531361311"},{"qualifier":"state","vlen":7,"tag":[],"timestamp":"1689531361311"}]},"ts":"1689531361311"} 2023-07-16 18:16:01,312 INFO [PEWorker-1] procedure2.ProcedureExecutor(1681): Initialized subprocedures=[{pid=18, ppid=17, state=RUNNABLE; CloseRegionProcedure 5e8419c1d565c20948145bc127a06bf3, server=jenkins-hbase4.apache.org,37401,1689531358669}] 2023-07-16 18:16:01,406 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-16 18:16:01,406 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver Metrics about HBase MasterObservers 2023-07-16 18:16:01,406 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 18:16:01,406 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering RegionServer,sub=Coprocessor.Region.CP_org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint Metrics about HBase RegionObservers 2023-07-16 18:16:01,406 DEBUG [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(130): Registering adapter for the MetricRegistry: Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-16 18:16:01,406 INFO [HBase-Metrics2-1] impl.GlobalMetricRegistriesAdapter(134): Registering Master,sub=Coprocessor.Master.CP_org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint Metrics about HBase MasterObservers 2023-07-16 18:16:01,408 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-16 18:16:01,464 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(111): Close 5e8419c1d565c20948145bc127a06bf3 2023-07-16 18:16:01,464 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5e8419c1d565c20948145bc127a06bf3, disabling compactions & flushes 2023-07-16 18:16:01,464 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region t1,,1689531360584.5e8419c1d565c20948145bc127a06bf3. 2023-07-16 18:16:01,464 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on t1,,1689531360584.5e8419c1d565c20948145bc127a06bf3. 2023-07-16 18:16:01,464 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on t1,,1689531360584.5e8419c1d565c20948145bc127a06bf3. after waiting 0 ms 2023-07-16 18:16:01,465 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region t1,,1689531360584.5e8419c1d565c20948145bc127a06bf3. 2023-07-16 18:16:01,468 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/default/t1/5e8419c1d565c20948145bc127a06bf3/recovered.edits/4.seqid, newMaxSeqId=4, maxSeqId=1 2023-07-16 18:16:01,469 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed t1,,1689531360584.5e8419c1d565c20948145bc127a06bf3. 2023-07-16 18:16:01,469 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5e8419c1d565c20948145bc127a06bf3: 2023-07-16 18:16:01,470 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.UnassignRegionHandler(149): Closed 5e8419c1d565c20948145bc127a06bf3 2023-07-16 18:16:01,470 INFO [PEWorker-5] assignment.RegionStateStore(219): pid=17 updating hbase:meta row=5e8419c1d565c20948145bc127a06bf3, regionState=CLOSED 2023-07-16 18:16:01,471 DEBUG [PEWorker-5] assignment.RegionStateStore(405): Put {"totalColumns":2,"row":"t1,,1689531360584.5e8419c1d565c20948145bc127a06bf3.","families":{"info":[{"qualifier":"regioninfo","vlen":36,"tag":[],"timestamp":"1689531361470"},{"qualifier":"state","vlen":6,"tag":[],"timestamp":"1689531361470"}]},"ts":"1689531361470"} 2023-07-16 18:16:01,473 INFO [PEWorker-5] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=18, resume processing ppid=17 2023-07-16 18:16:01,473 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=18, ppid=17, state=SUCCESS; CloseRegionProcedure 5e8419c1d565c20948145bc127a06bf3, server=jenkins-hbase4.apache.org,37401,1689531358669 in 160 msec 2023-07-16 18:16:01,474 INFO [PEWorker-4] procedure2.ProcedureExecutor(1824): Finished subprocedure pid=17, resume processing ppid=16 2023-07-16 18:16:01,474 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=17, ppid=16, state=SUCCESS; TransitRegionStateProcedure table=t1, region=5e8419c1d565c20948145bc127a06bf3, UNASSIGN in 163 msec 2023-07-16 18:16:01,475 DEBUG [PEWorker-2] hbase.MetaTableAccessor(2093): Put {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":2,"tag":[],"timestamp":"1689531361475"}]},"ts":"1689531361475"} 2023-07-16 18:16:01,476 INFO [PEWorker-2] hbase.MetaTableAccessor(1635): 
Updated tableName=t1, state=DISABLED in hbase:meta 2023-07-16 18:16:01,479 INFO [PEWorker-2] procedure.DisableTableProcedure(305): Set t1 to state=DISABLED 2023-07-16 18:16:01,480 INFO [PEWorker-2] procedure2.ProcedureExecutor(1410): Finished pid=16, state=SUCCESS; DisableTableProcedure table=t1 in 176 msec 2023-07-16 18:16:01,609 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(1230): Checking to see if procedure is done pid=16 2023-07-16 18:16:01,609 INFO [Listener at localhost/42859] client.HBaseAdmin$TableFuture(3541): Operation: DISABLE, Table Name: default:t1, procId: 16 completed 2023-07-16 18:16:01,610 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.HMaster$5(2228): Client=jenkins//172.31.14.131 delete t1 2023-07-16 18:16:01,611 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] procedure2.ProcedureExecutor(1029): Stored pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION; DeleteTableProcedure table=t1 2023-07-16 18:16:01,613 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(101): Waiting for RIT for pid=19, state=RUNNABLE:DELETE_TABLE_PRE_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-16 18:16:01,613 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint(577): Removing deleted table 't1' from rsgroup 'default' 2023-07-16 18:16:01,614 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(113): Deleting regions from filesystem for pid=19, state=RUNNABLE:DELETE_TABLE_CLEAR_FS_LAYOUT, locked=true; DeleteTableProcedure table=t1 2023-07-16 18:16:01,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:16:01,615 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:16:01,616 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:16:01,617 DEBUG [HFileArchiver-8] backup.HFileArchiver(131): ARCHIVING hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/.tmp/data/default/t1/5e8419c1d565c20948145bc127a06bf3 2023-07-16 18:16:01,618 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-16 18:16:01,619 DEBUG [HFileArchiver-8] backup.HFileArchiver(159): Archiving [FileablePath, hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/.tmp/data/default/t1/5e8419c1d565c20948145bc127a06bf3/cf1, FileablePath, hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/.tmp/data/default/t1/5e8419c1d565c20948145bc127a06bf3/recovered.edits] 2023-07-16 18:16:01,623 DEBUG [HFileArchiver-8] backup.HFileArchiver(582): Archived from FileablePath, hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/.tmp/data/default/t1/5e8419c1d565c20948145bc127a06bf3/recovered.edits/4.seqid to hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/archive/data/default/t1/5e8419c1d565c20948145bc127a06bf3/recovered.edits/4.seqid 2023-07-16 18:16:01,624 DEBUG [HFileArchiver-8] backup.HFileArchiver(596): Deleted hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/.tmp/data/default/t1/5e8419c1d565c20948145bc127a06bf3 2023-07-16 18:16:01,624 DEBUG 
[PEWorker-1] procedure.DeleteTableProcedure(328): Archived t1 regions 2023-07-16 18:16:01,626 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(118): Deleting regions from META for pid=19, state=RUNNABLE:DELETE_TABLE_REMOVE_FROM_META, locked=true; DeleteTableProcedure table=t1 2023-07-16 18:16:01,628 WARN [PEWorker-1] procedure.DeleteTableProcedure(384): Deleting some vestigial 1 rows of t1 from hbase:meta 2023-07-16 18:16:01,629 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(421): Removing 't1' descriptor. 2023-07-16 18:16:01,630 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(124): Deleting assignment state for pid=19, state=RUNNABLE:DELETE_TABLE_UNASSIGN_REGIONS, locked=true; DeleteTableProcedure table=t1 2023-07-16 18:16:01,630 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(411): Removing 't1' from region states. 2023-07-16 18:16:01,630 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1,,1689531360584.5e8419c1d565c20948145bc127a06bf3.","families":{"info":[{"qualifier":"","vlen":0,"tag":[],"timestamp":"1689531361630"}]},"ts":"9223372036854775807"} 2023-07-16 18:16:01,631 INFO [PEWorker-1] hbase.MetaTableAccessor(1788): Deleted 1 regions from META 2023-07-16 18:16:01,631 DEBUG [PEWorker-1] hbase.MetaTableAccessor(1789): Deleted regions: [{ENCODED => 5e8419c1d565c20948145bc127a06bf3, NAME => 't1,,1689531360584.5e8419c1d565c20948145bc127a06bf3.', STARTKEY => '', ENDKEY => ''}] 2023-07-16 18:16:01,631 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(415): Marking 't1' as deleted. 2023-07-16 18:16:01,632 DEBUG [PEWorker-1] hbase.MetaTableAccessor(2093): Delete {"totalColumns":1,"row":"t1","families":{"table":[{"qualifier":"state","vlen":0,"tag":[],"timestamp":"1689531361632"}]},"ts":"9223372036854775807"} 2023-07-16 18:16:01,633 INFO [PEWorker-1] hbase.MetaTableAccessor(1658): Deleted table t1 state from META 2023-07-16 18:16:01,634 DEBUG [PEWorker-1] procedure.DeleteTableProcedure(130): Finished pid=19, state=RUNNABLE:DELETE_TABLE_POST_OPERATION, locked=true; DeleteTableProcedure table=t1 2023-07-16 18:16:01,635 INFO [PEWorker-1] procedure2.ProcedureExecutor(1410): Finished pid=19, state=SUCCESS; DeleteTableProcedure table=t1 in 24 msec 2023-07-16 18:16:01,719 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(1230): Checking to see if procedure is done pid=19 2023-07-16 18:16:01,719 INFO [Listener at localhost/42859] client.HBaseAdmin$TableFuture(3541): Operation: DELETE, Table Name: default:t1, procId: 19 completed 2023-07-16 18:16:01,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:01,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:01,723 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:16:01,724 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 18:16:01,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:16:01,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 18:16:01,724 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:16:01,725 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 18:16:01,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:16:01,728 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 18:16:01,733 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:16:01,735 INFO [Listener at localhost/42859] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 18:16:01,735 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 18:16:01,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:16:01,737 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:16:01,739 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:16:01,741 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:16:01,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:01,743 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:01,745 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32899] to rsgroup master 2023-07-16 18:16:01,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:16:01,745 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] ipc.CallRunner(144): callId: 105 service: MasterService methodName: ExecMasterService size: 118 connection: 172.31.14.131:52678 deadline: 1689532561745, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. 2023-07-16 18:16:01,746 WARN [Listener at localhost/42859] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 18:16:01,749 INFO [Listener at localhost/42859] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:16:01,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:01,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:01,750 INFO [Listener at localhost/42859] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35551, jenkins-hbase4.apache.org:37401, jenkins-hbase4.apache.org:43051, jenkins-hbase4.apache.org:46003], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 18:16:01,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:16:01,750 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:16:01,768 INFO [Listener at localhost/42859] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNotMoveTableToNullRSGroupWhenCreatingExistingTable Thread=572 (was 559) - Thread LEAK? -, OpenFileDescriptor=857 (was 846) - OpenFileDescriptor LEAK? 
-, MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=349 (was 362), ProcessCount=171 (was 171), AvailableMemoryMB=4906 (was 4914) 2023-07-16 18:16:01,768 WARN [Listener at localhost/42859] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-16 18:16:01,785 INFO [Listener at localhost/42859] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=572, OpenFileDescriptor=857, MaxFileDescriptor=60000, SystemLoadAverage=349, ProcessCount=171, AvailableMemoryMB=4906 2023-07-16 18:16:01,785 WARN [Listener at localhost/42859] hbase.ResourceChecker(130): Thread=572 is superior to 500 2023-07-16 18:16:01,785 INFO [Listener at localhost/42859] rsgroup.TestRSGroupsBase(132): testNonExistentTableMove 2023-07-16 18:16:01,788 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:01,789 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:01,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:16:01,790 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-16 18:16:01,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:16:01,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 18:16:01,790 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:16:01,791 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 18:16:01,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:16:01,794 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 18:16:01,796 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:16:01,798 INFO [Listener at localhost/42859] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 18:16:01,798 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 18:16:01,800 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:16:01,800 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:16:01,802 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:16:01,803 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:16:01,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:01,805 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:01,807 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32899] to rsgroup master 2023-07-16 18:16:01,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:16:01,807 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] ipc.CallRunner(144): callId: 133 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52678 deadline: 1689532561807, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. 2023-07-16 18:16:01,808 WARN [Listener at localhost/42859] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 18:16:01,809 INFO [Listener at localhost/42859] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:16:01,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:01,810 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:01,810 INFO [Listener at localhost/42859] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35551, jenkins-hbase4.apache.org:37401, jenkins-hbase4.apache.org:43051, jenkins-hbase4.apache.org:46003], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 18:16:01,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:16:01,811 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:16:01,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-16 18:16:01,812 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 18:16:01,813 INFO [Listener at localhost/42859] rsgroup.TestRSGroupsAdmin1(389): Moving table GrouptestNonExistentTableMove to default 2023-07-16 18:16:01,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(184): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, table=GrouptestNonExistentTableMove 2023-07-16 18:16:01,818 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfoOfTable 2023-07-16 18:16:01,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:01,821 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:01,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:16:01,822 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 18:16:01,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:16:01,822 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 18:16:01,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:16:01,823 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 18:16:01,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:16:01,826 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 18:16:01,829 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:16:01,831 INFO [Listener at localhost/42859] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 18:16:01,832 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 18:16:01,833 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:16:01,834 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:16:01,835 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:16:01,836 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:16:01,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:01,838 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:01,840 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32899] to rsgroup master 2023-07-16 18:16:01,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:16:01,841 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] ipc.CallRunner(144): callId: 168 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52678 deadline: 1689532561840, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. 2023-07-16 18:16:01,841 WARN [Listener at localhost/42859] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 18:16:01,843 INFO [Listener at localhost/42859] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:16:01,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:01,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:01,844 INFO [Listener at localhost/42859] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35551, jenkins-hbase4.apache.org:37401, jenkins-hbase4.apache.org:43051, jenkins-hbase4.apache.org:46003], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 18:16:01,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:16:01,844 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:16:01,864 INFO [Listener at localhost/42859] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNonExistentTableMove Thread=574 (was 572) - Thread LEAK? 
-, OpenFileDescriptor=857 (was 857), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=349 (was 349), ProcessCount=171 (was 171), AvailableMemoryMB=4906 (was 4906) 2023-07-16 18:16:01,864 WARN [Listener at localhost/42859] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-16 18:16:01,881 INFO [Listener at localhost/42859] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=574, OpenFileDescriptor=857, MaxFileDescriptor=60000, SystemLoadAverage=349, ProcessCount=171, AvailableMemoryMB=4904 2023-07-16 18:16:01,881 WARN [Listener at localhost/42859] hbase.ResourceChecker(130): Thread=574 is superior to 500 2023-07-16 18:16:01,882 INFO [Listener at localhost/42859] rsgroup.TestRSGroupsBase(132): testGroupInfoMultiAccessing 2023-07-16 18:16:01,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:01,885 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:01,886 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:16:01,886 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-16 18:16:01,886 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:16:01,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 18:16:01,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:16:01,887 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 18:16:01,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:16:01,890 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 18:16:01,892 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:16:01,893 INFO [Listener at localhost/42859] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 18:16:01,894 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 18:16:01,895 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:16:01,896 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:16:01,898 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:16:01,900 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:16:01,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:01,902 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:01,903 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32899] to rsgroup master 2023-07-16 18:16:01,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:16:01,904 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] ipc.CallRunner(144): callId: 196 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52678 deadline: 1689532561903, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. 2023-07-16 18:16:01,904 WARN [Listener at localhost/42859] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 18:16:01,906 INFO [Listener at localhost/42859] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:16:01,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:01,906 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:01,907 INFO [Listener at localhost/42859] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35551, jenkins-hbase4.apache.org:37401, jenkins-hbase4.apache.org:43051, jenkins-hbase4.apache.org:46003], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 18:16:01,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:16:01,907 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:16:01,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:01,910 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:01,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:16:01,911 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
2023-07-16 18:16:01,911 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:16:01,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 18:16:01,912 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:16:01,913 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 18:16:01,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:16:01,916 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 18:16:01,920 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:16:01,922 INFO [Listener at localhost/42859] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 18:16:01,923 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 18:16:01,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:16:01,925 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:16:01,929 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:16:01,931 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:16:01,933 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:01,933 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:01,934 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32899] to rsgroup master 2023-07-16 18:16:01,934 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:16:01,935 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] ipc.CallRunner(144): callId: 224 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52678 deadline: 1689532561934, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. 2023-07-16 18:16:01,935 WARN [Listener at localhost/42859] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 18:16:01,937 INFO [Listener at localhost/42859] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:16:01,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:01,937 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:01,937 INFO [Listener at localhost/42859] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35551, jenkins-hbase4.apache.org:37401, jenkins-hbase4.apache.org:43051, jenkins-hbase4.apache.org:46003], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 18:16:01,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:16:01,938 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:16:01,956 INFO [Listener at localhost/42859] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testGroupInfoMultiAccessing Thread=575 (was 574) - Thread LEAK? -, OpenFileDescriptor=857 (was 857), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=349 (was 349), ProcessCount=171 (was 171), AvailableMemoryMB=4905 (was 4904) - AvailableMemoryMB LEAK? 
- 2023-07-16 18:16:01,956 WARN [Listener at localhost/42859] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-16 18:16:01,973 INFO [Listener at localhost/42859] hbase.ResourceChecker(147): before: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=575, OpenFileDescriptor=857, MaxFileDescriptor=60000, SystemLoadAverage=349, ProcessCount=171, AvailableMemoryMB=4904 2023-07-16 18:16:01,973 WARN [Listener at localhost/42859] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-16 18:16:01,973 INFO [Listener at localhost/42859] rsgroup.TestRSGroupsBase(132): testNamespaceConstraint 2023-07-16 18:16:01,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:01,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:01,977 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:16:01,978 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 2023-07-16 18:16:01,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:16:01,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 18:16:01,978 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:16:01,979 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 18:16:01,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:16:01,982 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 18:16:01,984 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:16:01,986 INFO [Listener at localhost/42859] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 18:16:01,987 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 18:16:01,988 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:16:01,989 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:16:01,990 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:16:01,992 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:16:01,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:01,994 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:01,996 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32899] to rsgroup master 2023-07-16 18:16:01,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:16:01,996 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] ipc.CallRunner(144): callId: 252 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52678 deadline: 1689532561996, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. 2023-07-16 18:16:01,996 WARN [Listener at localhost/42859] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.setUpBeforeMethod(TestRSGroupsBase.java:136) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.beforeMethod(TestRSGroupsAdmin1.java:86) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 
1 more 2023-07-16 18:16:01,998 INFO [Listener at localhost/42859] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:16:01,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:01,998 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:01,999 INFO [Listener at localhost/42859] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35551, jenkins-hbase4.apache.org:37401, jenkins-hbase4.apache.org:43051, jenkins-hbase4.apache.org:46003], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 18:16:01,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:16:01,999 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:16:01,999 INFO [Listener at localhost/42859] rsgroup.TestRSGroupsAdmin1(154): testNamespaceConstraint 2023-07-16 18:16:02,000 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_foo 2023-07-16 18:16:02,002 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-16 18:16:02,003 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:16:02,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:16:02,004 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 6 2023-07-16 18:16:02,005 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:16:02,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:02,007 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:02,009 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.HMaster$15(3014): Client=jenkins//172.31.14.131 creating {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-16 18:16:02,010 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] procedure2.ProcedureExecutor(1029): Stored pid=20, state=RUNNABLE:CREATE_NAMESPACE_PREPARE; CreateNamespaceProcedure, namespace=Group_foo 2023-07-16 18:16:02,013 DEBUG 
[RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-16 18:16:02,016 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 18:16:02,020 INFO [PEWorker-3] procedure2.ProcedureExecutor(1410): Finished pid=20, state=SUCCESS; CreateNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-16 18:16:02,114 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(1230): Checking to see if procedure is done pid=20 2023-07-16 18:16:02,115 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-16 18:16:02,116 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.removeRSGroup(RSGroupAdminServer.java:504) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.removeRSGroup(RSGroupAdminEndpoint.java:278) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16208) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:16:02,117 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] ipc.CallRunner(144): callId: 268 service: MasterService methodName: ExecMasterService size: 91 connection: 172.31.14.131:52678 deadline: 1689532562115, exception=org.apache.hadoop.hbase.constraint.ConstraintException: RSGroup Group_foo is referenced by namespace: Group_foo 2023-07-16 18:16:02,123 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.HMaster$16(3053): Client=jenkins//172.31.14.131 modify {NAME => 'Group_foo', hbase.rsgroup.name => 'Group_foo'} 2023-07-16 18:16:02,129 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] procedure2.ProcedureExecutor(1029): Stored pid=21, state=RUNNABLE:MODIFY_NAMESPACE_PREPARE; ModifyNamespaceProcedure, namespace=Group_foo 2023-07-16 18:16:02,135 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-16 18:16:02,137 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeDataChanged, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-16 18:16:02,138 INFO [PEWorker-5] procedure2.ProcedureExecutor(1410): Finished pid=21, state=SUCCESS; ModifyNamespaceProcedure, namespace=Group_foo in 13 msec 2023-07-16 18:16:02,236 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] 
master.MasterRpcServices(1230): Checking to see if procedure is done pid=21 2023-07-16 18:16:02,237 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup Group_anotherGroup 2023-07-16 18:16:02,239 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-16 18:16:02,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:16:02,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_foo 2023-07-16 18:16:02,241 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:16:02,242 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 8 2023-07-16 18:16:02,247 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:16:02,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:02,249 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:02,251 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.HMaster$17(3086): Client=jenkins//172.31.14.131 delete Group_foo 2023-07-16 18:16:02,252 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] procedure2.ProcedureExecutor(1029): Stored pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-16 18:16:02,253 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_PREPARE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-16 18:16:02,255 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_DELETE_FROM_NS_TABLE, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-16 18:16:02,256 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-16 18:16:02,257 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_FROM_ZK, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-16 18:16:02,258 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/namespace/Group_foo 2023-07-16 18:16:02,258 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/namespace 2023-07-16 18:16:02,259 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=22, 
state=RUNNABLE:DELETE_NAMESPACE_DELETE_DIRECTORIES, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-16 18:16:02,261 INFO [PEWorker-4] procedure.DeleteNamespaceProcedure(73): pid=22, state=RUNNABLE:DELETE_NAMESPACE_REMOVE_NAMESPACE_QUOTA, locked=true; DeleteNamespaceProcedure, namespace=Group_foo 2023-07-16 18:16:02,262 INFO [PEWorker-4] procedure2.ProcedureExecutor(1410): Finished pid=22, state=SUCCESS; DeleteNamespaceProcedure, namespace=Group_foo in 10 msec 2023-07-16 18:16:02,357 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(1230): Checking to see if procedure is done pid=22 2023-07-16 18:16:02,357 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_foo 2023-07-16 18:16:02,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/Group_anotherGroup 2023-07-16 18:16:02,361 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:16:02,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:16:02,362 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 7 2023-07-16 18:16:02,363 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:16:02,365 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:02,366 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:02,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint.preCreateNamespace(RSGroupAdminEndpoint.java:591) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:222) at org.apache.hadoop.hbase.master.MasterCoprocessorHost$1.call(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558) at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631) at org.apache.hadoop.hbase.master.MasterCoprocessorHost.preCreateNamespace(MasterCoprocessorHost.java:219) at org.apache.hadoop.hbase.master.HMaster$15.run(HMaster.java:3010) at org.apache.hadoop.hbase.master.procedure.MasterProcedureUtil.submitProcedure(MasterProcedureUtil.java:132) at org.apache.hadoop.hbase.master.HMaster.createNamespace(HMaster.java:3007) at org.apache.hadoop.hbase.master.MasterRpcServices.createNamespace(MasterRpcServices.java:684) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:16:02,368 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] ipc.CallRunner(144): callId: 290 service: MasterService methodName: CreateNamespace size: 70 connection: 172.31.14.131:52678 deadline: 1689531422367, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Region server group foo does not exist. 2023-07-16 18:16:02,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:02,371 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:02,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:16:02,372 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
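The two ConstraintExceptions logged above are the constraint that testNamespaceConstraint exercises: a namespace whose hbase.rsgroup.name property names a non-existent group is rejected in RSGroupAdminEndpoint.preCreateNamespace ("Region server group foo does not exist"), and a group that is still referenced by such a namespace cannot be removed ("RSGroup Group_foo is referenced by namespace: Group_foo"). A minimal sketch of that binding from the client side, assuming the stock Admin/NamespaceDescriptor API; the class name and connection setup here are illustrative only:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.NamespaceDescriptor;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;

    public class NamespaceRSGroupBindingSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          // Bind the namespace to an rsgroup through its hbase.rsgroup.name property.
          // The rsgroup coprocessor rejects the create with a ConstraintException when
          // the named group does not exist, and refuses to remove a group while a
          // namespace still references it.
          NamespaceDescriptor ns = NamespaceDescriptor.create("Group_foo")
              .addConfiguration("hbase.rsgroup.name", "Group_foo")
              .build();
          admin.createNamespace(ns);
        }
      }
    }

In the run above the create succeeds because Group_foo exists, the first RemoveRSGroup attempt is refused, and only after the ModifyNamespaceProcedure and DeleteNamespaceProcedure finish does the RemoveRSGroup call go through.
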
2023-07-16 18:16:02,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:16:02,372 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 18:16:02,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:16:02,373 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup Group_anotherGroup 2023-07-16 18:16:02,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:16:02,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:16:02,376 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 5 2023-07-16 18:16:02,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:16:02,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(231): Client=jenkins//172.31.14.131 move tables [] to rsgroup default 2023-07-16 18:16:02,378 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminServer(448): moveTables() passed an empty set. Ignoring. 
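The ConstraintException "Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist", which recurs in the setup and teardown of each method here, comes out of RSGroupAdminServer.moveServers: TestRSGroupsBase recreates a "master" group and tries to move the master's address into it, but 32899 is the active master's RPC port (master=jenkins-hbase4.apache.org,32899,1689531358331), not a live region server known to the group manager, so the move is refused and the test merely logs it as "Got this on setup, FYI". A sketch of the client calls involved, assuming the branch-2 hbase-rsgroup RSGroupAdminClient API; the class name is illustrative only:

    import java.util.Collections;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.net.Address;
    import org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient;

    public class MoveMasterToGroupSketch {
      public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
          RSGroupAdminClient rsGroupAdmin = new RSGroupAdminClient(conn);
          // Recreate the "master" rsgroup, as the test's setup/teardown does.
          rsGroupAdmin.addRSGroup("master");
          // Attempt to move the master's address into it. The address is not among the
          // region servers the group manager tracks, so the call fails with the
          // ConstraintException seen in the log.
          rsGroupAdmin.moveServers(
              Collections.singleton(Address.fromParts("jenkins-hbase4.apache.org", 32899)),
              "master");
        }
      }
    }
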
2023-07-16 18:16:02,378 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveTables 2023-07-16 18:16:02,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [] to rsgroup default 2023-07-16 18:16:02,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.MoveServers 2023-07-16 18:16:02,379 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(272): Client=jenkins//172.31.14.131 remove rsgroup master 2023-07-16 18:16:02,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:16:02,382 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 3 2023-07-16 18:16:02,384 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.RemoveRSGroup 2023-07-16 18:16:02,386 INFO [Listener at localhost/42859] rsgroup.TestRSGroupsBase(152): Restoring servers: 0 2023-07-16 18:16:02,387 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(252): Client=jenkins//172.31.14.131 add rsgroup master 2023-07-16 18:16:02,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/default 2023-07-16 18:16:02,388 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(662): Updating znode: /hbase/rsgroup/master 2023-07-16 18:16:02,390 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupInfoManagerImpl(668): Writing ZK GroupInfo count: 4 2023-07-16 18:16:02,391 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.AddRSGroup 2023-07-16 18:16:02,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:02,392 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:02,394 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(206): Client=jenkins//172.31.14.131 move servers [jenkins-hbase4.apache.org:32899] to rsgroup master 2023-07-16 18:16:02,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] ipc.MetricsHBaseServer(134): Unknown exception type org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) 2023-07-16 18:16:02,394 DEBUG [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] ipc.CallRunner(144): callId: 320 service: MasterService methodName: ExecMasterService size: 119 connection: 172.31.14.131:52678 deadline: 1689532562394, exception=org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. 2023-07-16 18:16:02,395 WARN [Listener at localhost/42859] rsgroup.TestRSGroupsBase(163): Got this on setup, FYI org.apache.hadoop.hbase.constraint.ConstraintException: org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at sun.reflect.GeneratedConstructorAccessor57.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:97) at org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:87) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:376) at org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.handleRemoteException(ProtobufUtil.java:364) at org.apache.hadoop.hbase.client.MasterCallable.call(MasterCallable.java:101) at org.apache.hadoop.hbase.client.HBaseAdmin$74.callExecService(HBaseAdmin.java:2985) at org.apache.hadoop.hbase.client.SyncCoprocessorRpcChannel.callBlockingMethod(SyncCoprocessorRpcChannel.java:62) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService$BlockingStub.moveServers(RSGroupAdminProtos.java:16658) at 
org.apache.hadoop.hbase.rsgroup.RSGroupAdminClient.moveServers(RSGroupAdminClient.java:108) at org.apache.hadoop.hbase.rsgroup.VerifyingRSGroupAdminClient.moveServers(VerifyingRSGroupAdminClient.java:77) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase.tearDownAfterMethod(TestRSGroupsBase.java:161) at org.apache.hadoop.hbase.rsgroup.TestRSGroupsAdmin1.afterMethod(TestRSGroupsAdmin1.java:91) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299) at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.constraint.ConstraintException): org.apache.hadoop.hbase.constraint.ConstraintException: Server jenkins-hbase4.apache.org:32899 is either offline or it does not exist. 
at org.apache.hadoop.hbase.rsgroup.RSGroupAdminServer.moveServers(RSGroupAdminServer.java:408) at org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl.moveServers(RSGroupAdminEndpoint.java:213) at org.apache.hadoop.hbase.protobuf.generated.RSGroupAdminProtos$RSGroupAdminService.callMethod(RSGroupAdminProtos.java:16193) at org.apache.hadoop.hbase.master.MasterRpcServices.execMasterService(MasterRpcServices.java:900) at org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java) at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:387) at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:132) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:369) at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:349) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:382) at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:88) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:416) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:412) at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:115) at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:130) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.readResponse(NettyRpcDuplexHandler.java:162) at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelRead(NettyRpcDuplexHandler.java:192) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) at 
org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650) at org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at org.apache.hbase.thirdparty.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) at org.apache.hbase.thirdparty.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ... 1 more 2023-07-16 18:16:02,396 INFO [Listener at localhost/42859] hbase.Waiter(180): Waiting up to [60,000] milli-secs(wait.for.ratio=[1]) 2023-07-16 18:16:02,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(316): Client=jenkins//172.31.14.131 list rsgroup 2023-07-16 18:16:02,397 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.ListRSGroupInfos 2023-07-16 18:16:02,397 INFO [Listener at localhost/42859] rsgroup.TestRSGroupsBase$2(169): Waiting for cleanup to finish [Name:default, Servers:[jenkins-hbase4.apache.org:35551, jenkins-hbase4.apache.org:37401, jenkins-hbase4.apache.org:43051, jenkins-hbase4.apache.org:46003], Tables:[hbase:meta, hbase:namespace, hbase:rsgroup], Configurations:{}, Name:master, Servers:[], Tables:[], Configurations:{}] 2023-07-16 18:16:02,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] rsgroup.RSGroupAdminEndpoint$RSGroupAdminServiceImpl(165): Client=jenkins//172.31.14.131 initiates rsgroup info retrieval, group=default 2023-07-16 18:16:02,398 INFO [RpcServer.default.FPBQ.Fifo.handler=2,queue=0,port=32899] master.MasterRpcServices(912): User jenkins (auth:SIMPLE) (remote address: /172.31.14.131) master service request for RSGroupAdminService.GetRSGroupInfo 2023-07-16 18:16:02,415 INFO [Listener at localhost/42859] hbase.ResourceChecker(175): after: rsgroup.TestRSGroupsAdmin1#testNamespaceConstraint Thread=575 (was 575), OpenFileDescriptor=857 (was 857), MaxFileDescriptor=60000 (was 60000), SystemLoadAverage=349 (was 349), ProcessCount=171 (was 171), AvailableMemoryMB=4903 (was 4904) 2023-07-16 18:16:02,415 WARN [Listener at localhost/42859] hbase.ResourceChecker(130): Thread=575 is superior to 500 2023-07-16 18:16:02,416 INFO [Listener at localhost/42859] hbase.HBaseTestingUtility(1286): Shutting down minicluster 2023-07-16 18:16:02,416 INFO [Listener at localhost/42859] client.ConnectionImplementation(1979): Closing master protocol: MasterService 2023-07-16 18:16:02,416 DEBUG [Listener at localhost/42859] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x5da743fd to 127.0.0.1:54881 2023-07-16 18:16:02,416 DEBUG [Listener at localhost/42859] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:16:02,416 DEBUG [Listener at localhost/42859] util.JVMClusterUtil(237): Shutting down HBase Cluster 2023-07-16 
18:16:02,416 DEBUG [Listener at localhost/42859] util.JVMClusterUtil(257): Found active master hash=487293608, stopped=false 2023-07-16 18:16:02,416 DEBUG [Listener at localhost/42859] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.RSGroupAdminEndpoint 2023-07-16 18:16:02,416 DEBUG [Listener at localhost/42859] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.rsgroup.TestRSGroupsBase$CPMasterObserver 2023-07-16 18:16:02,416 INFO [Listener at localhost/42859] master.ServerManager(901): Cluster shutdown requested of master=jenkins-hbase4.apache.org,32899,1689531358331 2023-07-16 18:16:02,419 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:43051-0x1016f591f660001, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 18:16:02,419 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:37401-0x1016f591f660002, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 18:16:02,419 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:46003-0x1016f591f66000b, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 18:16:02,419 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 18:16:02,419 INFO [Listener at localhost/42859] procedure2.ProcedureExecutor(629): Stopping 2023-07-16 18:16:02,419 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:35551-0x1016f591f660003, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/running 2023-07-16 18:16:02,419 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:16:02,419 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:46003-0x1016f591f66000b, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 18:16:02,419 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:43051-0x1016f591f660001, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 18:16:02,420 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:37401-0x1016f591f660002, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 18:16:02,420 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 18:16:02,420 DEBUG [Listener at localhost/42859] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x63fcf2f8 to 127.0.0.1:54881 2023-07-16 18:16:02,420 DEBUG [Listener at localhost/42859] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:16:02,420 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): regionserver:35551-0x1016f591f660003, 
quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/running 2023-07-16 18:16:02,420 INFO [Listener at localhost/42859] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,43051,1689531358519' ***** 2023-07-16 18:16:02,420 INFO [Listener at localhost/42859] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 18:16:02,420 INFO [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 18:16:02,421 INFO [Listener at localhost/42859] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,37401,1689531358669' ***** 2023-07-16 18:16:02,423 INFO [Listener at localhost/42859] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 18:16:02,424 INFO [Listener at localhost/42859] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,35551,1689531358817' ***** 2023-07-16 18:16:02,424 INFO [Listener at localhost/42859] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 18:16:02,424 INFO [Listener at localhost/42859] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,46003,1689531360224' ***** 2023-07-16 18:16:02,424 INFO [Listener at localhost/42859] regionserver.HRegionServer(2311): STOPPED: Shutdown requested 2023-07-16 18:16:02,424 INFO [RS:2;jenkins-hbase4:35551] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 18:16:02,425 INFO [RS:1;jenkins-hbase4:37401] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 18:16:02,425 INFO [RS:3;jenkins-hbase4:46003] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 18:16:02,428 INFO [RS:0;jenkins-hbase4:43051] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@736fc548{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 18:16:02,430 INFO [RS:0;jenkins-hbase4:43051] server.AbstractConnector(383): Stopped ServerConnector@5f6ec059{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 18:16:02,430 INFO [RS:0;jenkins-hbase4:43051] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 18:16:02,430 INFO [RS:1;jenkins-hbase4:37401] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7de77cdd{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 18:16:02,430 INFO [RS:2;jenkins-hbase4:35551] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@656282{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 18:16:02,431 INFO [RS:3;jenkins-hbase4:46003] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@7d5d5f98{regionserver,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/regionserver} 2023-07-16 18:16:02,432 INFO [RS:2;jenkins-hbase4:35551] server.AbstractConnector(383): Stopped ServerConnector@e9426d4{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 18:16:02,431 INFO [RS:0;jenkins-hbase4:43051] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@595b7975{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 18:16:02,432 INFO [RS:3;jenkins-hbase4:46003] server.AbstractConnector(383): Stopped ServerConnector@c24108f{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 18:16:02,432 INFO [RS:2;jenkins-hbase4:35551] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 18:16:02,432 INFO [RS:1;jenkins-hbase4:37401] server.AbstractConnector(383): Stopped ServerConnector@701fd4b6{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 18:16:02,432 INFO [RS:0;jenkins-hbase4:43051] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2a4c265e{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/hadoop.log.dir/,STOPPED} 2023-07-16 18:16:02,432 INFO [RS:3;jenkins-hbase4:46003] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 18:16:02,433 INFO [RS:2;jenkins-hbase4:35551] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@7d2ac34c{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 18:16:02,433 INFO [RS:1;jenkins-hbase4:37401] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 18:16:02,435 INFO [RS:3;jenkins-hbase4:46003] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@461e4fbf{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 18:16:02,435 INFO [RS:0;jenkins-hbase4:43051] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 18:16:02,435 INFO [RS:2;jenkins-hbase4:35551] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@451ae1e1{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/hadoop.log.dir/,STOPPED} 2023-07-16 18:16:02,436 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 18:16:02,436 INFO [RS:3;jenkins-hbase4:46003] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@461748f3{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/hadoop.log.dir/,STOPPED} 2023-07-16 18:16:02,436 INFO [RS:3;jenkins-hbase4:46003] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 18:16:02,436 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 18:16:02,436 INFO [RS:3;jenkins-hbase4:46003] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 18:16:02,436 INFO [RS:3;jenkins-hbase4:46003] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 
2023-07-16 18:16:02,437 INFO [RS:3;jenkins-hbase4:46003] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,46003,1689531360224 2023-07-16 18:16:02,437 DEBUG [RS:3;jenkins-hbase4:46003] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x632a472d to 127.0.0.1:54881 2023-07-16 18:16:02,437 DEBUG [RS:3;jenkins-hbase4:46003] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:16:02,437 INFO [RS:3;jenkins-hbase4:46003] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,46003,1689531360224; all regions closed. 2023-07-16 18:16:02,443 INFO [RS:0;jenkins-hbase4:43051] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 18:16:02,443 INFO [RS:0;jenkins-hbase4:43051] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 18:16:02,443 INFO [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(3305): Received CLOSE for 5e547c590a8a20119ab4e8cece71317b 2023-07-16 18:16:02,443 INFO [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,43051,1689531358519 2023-07-16 18:16:02,443 DEBUG [RS:0;jenkins-hbase4:43051] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x6e985cec to 127.0.0.1:54881 2023-07-16 18:16:02,443 DEBUG [RS:0;jenkins-hbase4:43051] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:16:02,443 INFO [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-16 18:16:02,443 DEBUG [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(1478): Online Regions={5e547c590a8a20119ab4e8cece71317b=hbase:namespace,,1689531359735.5e547c590a8a20119ab4e8cece71317b.} 2023-07-16 18:16:02,444 DEBUG [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(1504): Waiting on 5e547c590a8a20119ab4e8cece71317b 2023-07-16 18:16:02,444 INFO [RS:2;jenkins-hbase4:35551] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 18:16:02,444 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 5e547c590a8a20119ab4e8cece71317b, disabling compactions & flushes 2023-07-16 18:16:02,445 INFO [RS:2;jenkins-hbase4:35551] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 2023-07-16 18:16:02,444 INFO [RS:1;jenkins-hbase4:37401] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@43f5f68f{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 18:16:02,445 INFO [RS:2;jenkins-hbase4:35551] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 18:16:02,445 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:namespace,,1689531359735.5e547c590a8a20119ab4e8cece71317b. 2023-07-16 18:16:02,446 INFO [RS:2;jenkins-hbase4:35551] regionserver.HRegionServer(3305): Received CLOSE for c09408583a4b3a7e05655ca4c086b84f 2023-07-16 18:16:02,445 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 18:16:02,446 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:namespace,,1689531359735.5e547c590a8a20119ab4e8cece71317b. 
2023-07-16 18:16:02,446 INFO [RS:1;jenkins-hbase4:37401] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6e117f46{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/hadoop.log.dir/,STOPPED} 2023-07-16 18:16:02,446 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:namespace,,1689531359735.5e547c590a8a20119ab4e8cece71317b. after waiting 0 ms 2023-07-16 18:16:02,446 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:namespace,,1689531359735.5e547c590a8a20119ab4e8cece71317b. 2023-07-16 18:16:02,446 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 5e547c590a8a20119ab4e8cece71317b 1/1 column families, dataSize=267 B heapSize=904 B 2023-07-16 18:16:02,446 INFO [RS:2;jenkins-hbase4:35551] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,35551,1689531358817 2023-07-16 18:16:02,446 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing c09408583a4b3a7e05655ca4c086b84f, disabling compactions & flushes 2023-07-16 18:16:02,446 DEBUG [RS:2;jenkins-hbase4:35551] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x399d1e8b to 127.0.0.1:54881 2023-07-16 18:16:02,446 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:rsgroup,,1689531359722.c09408583a4b3a7e05655ca4c086b84f. 2023-07-16 18:16:02,446 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:rsgroup,,1689531359722.c09408583a4b3a7e05655ca4c086b84f. 2023-07-16 18:16:02,447 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:rsgroup,,1689531359722.c09408583a4b3a7e05655ca4c086b84f. after waiting 0 ms 2023-07-16 18:16:02,447 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:rsgroup,,1689531359722.c09408583a4b3a7e05655ca4c086b84f. 2023-07-16 18:16:02,446 DEBUG [RS:2;jenkins-hbase4:35551] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:16:02,447 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing c09408583a4b3a7e05655ca4c086b84f 1/1 column families, dataSize=6.43 KB heapSize=10.63 KB 2023-07-16 18:16:02,447 INFO [RS:2;jenkins-hbase4:35551] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-16 18:16:02,448 DEBUG [RS:2;jenkins-hbase4:35551] regionserver.HRegionServer(1478): Online Regions={c09408583a4b3a7e05655ca4c086b84f=hbase:rsgroup,,1689531359722.c09408583a4b3a7e05655ca4c086b84f.} 2023-07-16 18:16:02,448 DEBUG [RS:2;jenkins-hbase4:35551] regionserver.HRegionServer(1504): Waiting on c09408583a4b3a7e05655ca4c086b84f 2023-07-16 18:16:02,449 INFO [RS:1;jenkins-hbase4:37401] regionserver.HeapMemoryManager(220): Stopping 2023-07-16 18:16:02,449 INFO [MemStoreFlusher.0] regionserver.MemStoreFlusher$FlushHandler(361): MemStoreFlusher.0 exiting 2023-07-16 18:16:02,449 INFO [RS:1;jenkins-hbase4:37401] flush.RegionServerFlushTableProcedureManager(117): Stopping region server flush procedure manager gracefully. 
2023-07-16 18:16:02,449 DEBUG [RS:3;jenkins-hbase4:46003] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/oldWALs 2023-07-16 18:16:02,449 INFO [RS:1;jenkins-hbase4:37401] snapshot.RegionServerSnapshotManager(137): Stopping RegionServerSnapshotManager gracefully. 2023-07-16 18:16:02,449 INFO [RS:3;jenkins-hbase4:46003] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C46003%2C1689531360224:(num 1689531360541) 2023-07-16 18:16:02,449 DEBUG [RS:3;jenkins-hbase4:46003] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:16:02,449 INFO [RS:1;jenkins-hbase4:37401] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,37401,1689531358669 2023-07-16 18:16:02,449 INFO [RS:3;jenkins-hbase4:46003] regionserver.LeaseManager(133): Closed leases 2023-07-16 18:16:02,449 DEBUG [RS:1;jenkins-hbase4:37401] zookeeper.ReadOnlyZKClient(361): Close zookeeper connection 0x7c93a891 to 127.0.0.1:54881 2023-07-16 18:16:02,449 DEBUG [RS:1;jenkins-hbase4:37401] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:16:02,449 INFO [RS:1;jenkins-hbase4:37401] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 18:16:02,449 INFO [RS:1;jenkins-hbase4:37401] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 18:16:02,449 INFO [RS:1;jenkins-hbase4:37401] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 18:16:02,449 INFO [RS:1;jenkins-hbase4:37401] regionserver.HRegionServer(3305): Received CLOSE for 1588230740 2023-07-16 18:16:02,449 INFO [RS:3;jenkins-hbase4:46003] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-16 18:16:02,451 INFO [RS:1;jenkins-hbase4:37401] regionserver.HRegionServer(1474): Waiting on 1 regions to close 2023-07-16 18:16:02,451 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 18:16:02,451 DEBUG [RS:1;jenkins-hbase4:37401] regionserver.HRegionServer(1478): Online Regions={1588230740=hbase:meta,,1.1588230740} 2023-07-16 18:16:02,451 DEBUG [RS:1;jenkins-hbase4:37401] regionserver.HRegionServer(1504): Waiting on 1588230740 2023-07-16 18:16:02,451 INFO [RS:3;jenkins-hbase4:46003] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 18:16:02,451 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1604): Closing 1588230740, disabling compactions & flushes 2023-07-16 18:16:02,451 INFO [RS:3;jenkins-hbase4:46003] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 18:16:02,452 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1626): Closing region hbase:meta,,1.1588230740 2023-07-16 18:16:02,452 INFO [RS:3;jenkins-hbase4:46003] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-16 18:16:02,452 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1647): Waiting without time limit for close lock on hbase:meta,,1.1588230740 2023-07-16 18:16:02,452 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1714): Acquired close lock on hbase:meta,,1.1588230740 after waiting 0 ms 2023-07-16 18:16:02,452 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1724): Updates disabled for region hbase:meta,,1.1588230740 2023-07-16 18:16:02,452 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2745): Flushing 1588230740 3/3 column families, dataSize=4.51 KB heapSize=8.81 KB 2023-07-16 18:16:02,453 INFO [RS:3;jenkins-hbase4:46003] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:46003 2023-07-16 18:16:02,455 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:43051-0x1016f591f660001, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46003,1689531360224 2023-07-16 18:16:02,455 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:43051-0x1016f591f660001, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:16:02,455 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:16:02,455 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:46003-0x1016f591f66000b, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46003,1689531360224 2023-07-16 18:16:02,455 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:46003-0x1016f591f66000b, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:16:02,455 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:37401-0x1016f591f660002, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46003,1689531360224 2023-07-16 18:16:02,455 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:37401-0x1016f591f660002, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:16:02,455 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:35551-0x1016f591f660003, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,46003,1689531360224 2023-07-16 18:16:02,455 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:35551-0x1016f591f660003, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:16:02,456 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,46003,1689531360224] 2023-07-16 
18:16:02,456 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,46003,1689531360224; numProcessing=1 2023-07-16 18:16:02,460 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,46003,1689531360224 already deleted, retry=false 2023-07-16 18:16:02,460 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,46003,1689531360224 expired; onlineServers=3 2023-07-16 18:16:02,461 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 18:16:02,486 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=267 B at sequenceid=9 (bloomFilter=true), to=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/namespace/5e547c590a8a20119ab4e8cece71317b/.tmp/info/cd82b9d0704646de97c389e110650753 2023-07-16 18:16:02,486 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=6.43 KB at sequenceid=29 (bloomFilter=true), to=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/rsgroup/c09408583a4b3a7e05655ca4c086b84f/.tmp/m/8f79e448213345e38260d6ad755b6912 2023-07-16 18:16:02,491 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=4.01 KB at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740/.tmp/info/56278d00d2b54f9eb7a09b4ce2238a42 2023-07-16 18:16:02,495 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cd82b9d0704646de97c389e110650753 2023-07-16 18:16:02,496 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/namespace/5e547c590a8a20119ab4e8cece71317b/.tmp/info/cd82b9d0704646de97c389e110650753 as hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/namespace/5e547c590a8a20119ab4e8cece71317b/info/cd82b9d0704646de97c389e110650753 2023-07-16 18:16:02,497 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 8f79e448213345e38260d6ad755b6912 2023-07-16 18:16:02,497 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/rsgroup/c09408583a4b3a7e05655ca4c086b84f/.tmp/m/8f79e448213345e38260d6ad755b6912 as hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/rsgroup/c09408583a4b3a7e05655ca4c086b84f/m/8f79e448213345e38260d6ad755b6912 2023-07-16 18:16:02,503 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 56278d00d2b54f9eb7a09b4ce2238a42 2023-07-16 18:16:02,506 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 18:16:02,509 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom 
(CompoundBloomFilter) metadata for 8f79e448213345e38260d6ad755b6912 2023-07-16 18:16:02,509 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/rsgroup/c09408583a4b3a7e05655ca4c086b84f/m/8f79e448213345e38260d6ad755b6912, entries=12, sequenceid=29, filesize=5.4 K 2023-07-16 18:16:02,510 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~6.43 KB/6586, heapSize ~10.61 KB/10864, currentSize=0 B/0 for c09408583a4b3a7e05655ca4c086b84f in 63ms, sequenceid=29, compaction requested=false 2023-07-16 18:16:02,520 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 18:16:02,521 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for cd82b9d0704646de97c389e110650753 2023-07-16 18:16:02,521 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/namespace/5e547c590a8a20119ab4e8cece71317b/info/cd82b9d0704646de97c389e110650753, entries=3, sequenceid=9, filesize=5.0 K 2023-07-16 18:16:02,522 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~267 B/267, heapSize ~888 B/888, currentSize=0 B/0 for 5e547c590a8a20119ab4e8cece71317b in 76ms, sequenceid=9, compaction requested=false 2023-07-16 18:16:02,522 INFO [regionserver/jenkins-hbase4:0.leaseChecker] regionserver.LeaseManager(133): Closed leases 2023-07-16 18:16:02,528 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/rsgroup/c09408583a4b3a7e05655ca4c086b84f/recovered.edits/32.seqid, newMaxSeqId=32, maxSeqId=1 2023-07-16 18:16:02,529 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 18:16:02,529 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:rsgroup,,1689531359722.c09408583a4b3a7e05655ca4c086b84f. 2023-07-16 18:16:02,529 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for c09408583a4b3a7e05655ca4c086b84f: 2023-07-16 18:16:02,529 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:rsgroup,,1689531359722.c09408583a4b3a7e05655ca4c086b84f. 2023-07-16 18:16:02,533 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/namespace/5e547c590a8a20119ab4e8cece71317b/recovered.edits/12.seqid, newMaxSeqId=12, maxSeqId=1 2023-07-16 18:16:02,534 INFO [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:namespace,,1689531359735.5e547c590a8a20119ab4e8cece71317b. 
2023-07-16 18:16:02,534 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 5e547c590a8a20119ab4e8cece71317b: 2023-07-16 18:16:02,534 DEBUG [RS_CLOSE_REGION-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:namespace,,1689531359735.5e547c590a8a20119ab4e8cece71317b. 2023-07-16 18:16:02,539 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=82 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740/.tmp/rep_barrier/9efff9e7e36444d9a4c1ed099e1cb5b6 2023-07-16 18:16:02,544 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9efff9e7e36444d9a4c1ed099e1cb5b6 2023-07-16 18:16:02,555 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=428 B at sequenceid=26 (bloomFilter=false), to=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740/.tmp/table/7b24300c0a204057bd63b0f535719083 2023-07-16 18:16:02,560 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7b24300c0a204057bd63b0f535719083 2023-07-16 18:16:02,561 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740/.tmp/info/56278d00d2b54f9eb7a09b4ce2238a42 as hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740/info/56278d00d2b54f9eb7a09b4ce2238a42 2023-07-16 18:16:02,566 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 56278d00d2b54f9eb7a09b4ce2238a42 2023-07-16 18:16:02,567 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740/info/56278d00d2b54f9eb7a09b4ce2238a42, entries=22, sequenceid=26, filesize=7.3 K 2023-07-16 18:16:02,567 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740/.tmp/rep_barrier/9efff9e7e36444d9a4c1ed099e1cb5b6 as hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740/rep_barrier/9efff9e7e36444d9a4c1ed099e1cb5b6 2023-07-16 18:16:02,573 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 9efff9e7e36444d9a4c1ed099e1cb5b6 2023-07-16 18:16:02,573 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740/rep_barrier/9efff9e7e36444d9a4c1ed099e1cb5b6, entries=1, sequenceid=26, filesize=4.9 K 2023-07-16 18:16:02,574 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegionFileSystem(485): Committing 
hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740/.tmp/table/7b24300c0a204057bd63b0f535719083 as hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740/table/7b24300c0a204057bd63b0f535719083 2023-07-16 18:16:02,580 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.StoreFileReader(520): Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 7b24300c0a204057bd63b0f535719083 2023-07-16 18:16:02,580 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HStore(1080): Added hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740/table/7b24300c0a204057bd63b0f535719083, entries=6, sequenceid=26, filesize=5.1 K 2023-07-16 18:16:02,581 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(2948): Finished flush of dataSize ~4.51 KB/4614, heapSize ~8.77 KB/8976, currentSize=0 B/0 for 1588230740 in 129ms, sequenceid=26, compaction requested=false 2023-07-16 18:16:02,592 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] wal.WALSplitUtil(408): Wrote file=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/data/hbase/meta/1588230740/recovered.edits/29.seqid, newMaxSeqId=29, maxSeqId=1 2023-07-16 18:16:02,592 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] coprocessor.CoprocessorHost(310): Stop coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint 2023-07-16 18:16:02,592 INFO [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1838): Closed hbase:meta,,1.1588230740 2023-07-16 18:16:02,592 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] regionserver.HRegion(1558): Region close journal for 1588230740: 2023-07-16 18:16:02,593 DEBUG [RS_CLOSE_META-regionserver/jenkins-hbase4:0-0] handler.CloseRegionHandler(117): Closed hbase:meta,,1.1588230740 2023-07-16 18:16:02,618 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:46003-0x1016f591f66000b, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:16:02,618 INFO [RS:3;jenkins-hbase4:46003] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,46003,1689531360224; zookeeper connection closed. 2023-07-16 18:16:02,618 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:46003-0x1016f591f66000b, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:16:02,618 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@6459cb9b] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@6459cb9b 2023-07-16 18:16:02,644 INFO [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,43051,1689531358519; all regions closed. 2023-07-16 18:16:02,648 INFO [RS:2;jenkins-hbase4:35551] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,35551,1689531358817; all regions closed. 
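The entries above show the region close path during shutdown: each region server receives CLOSE for its online regions (hbase:namespace on RS:0, hbase:rsgroup on RS:2, hbase:meta on RS:1), flushes the remaining memstore to an HFile, writes a recovered.edits/NN.seqid marker, and only then reports "all regions closed". As a minimal sketch of triggering that same flush path explicitly, assuming an already-started HBaseTestingUtility (the helper and the testUtil parameter below are illustrative, not part of the test):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;

    final class FlushBeforeShutdown {
      // Flush the same system tables the close handlers above flush, so the
      // shutdown path finds empty memstores. Purely illustrative.
      static void flushSystemTables(HBaseTestingUtility testUtil) throws Exception {
        try (Admin admin = testUtil.getConnection().getAdmin()) {
          admin.flush(TableName.valueOf("hbase:namespace"));
          admin.flush(TableName.valueOf("hbase:rsgroup"));
          admin.flush(TableName.META_TABLE_NAME);
        }
      }
    }
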
2023-07-16 18:16:02,650 DEBUG [RS:0;jenkins-hbase4:43051] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/oldWALs 2023-07-16 18:16:02,650 INFO [RS:0;jenkins-hbase4:43051] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C43051%2C1689531358519:(num 1689531359485) 2023-07-16 18:16:02,650 DEBUG [RS:0;jenkins-hbase4:43051] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:16:02,650 INFO [RS:0;jenkins-hbase4:43051] regionserver.LeaseManager(133): Closed leases 2023-07-16 18:16:02,650 INFO [RS:0;jenkins-hbase4:43051] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-16 18:16:02,650 INFO [RS:0;jenkins-hbase4:43051] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 18:16:02,650 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 18:16:02,650 INFO [RS:0;jenkins-hbase4:43051] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 18:16:02,651 INFO [RS:0;jenkins-hbase4:43051] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 2023-07-16 18:16:02,652 INFO [RS:1;jenkins-hbase4:37401] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,37401,1689531358669; all regions closed. 2023-07-16 18:16:02,652 INFO [RS:0;jenkins-hbase4:43051] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:43051 2023-07-16 18:16:02,653 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/WALs/jenkins-hbase4.apache.org,35551,1689531358817/jenkins-hbase4.apache.org%2C35551%2C1689531358817.1689531359482 not finished, retry = 0 2023-07-16 18:16:02,655 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:37401-0x1016f591f660002, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43051,1689531358519 2023-07-16 18:16:02,655 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:43051-0x1016f591f660001, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43051,1689531358519 2023-07-16 18:16:02,655 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:16:02,655 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:35551-0x1016f591f660003, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,43051,1689531358519 2023-07-16 18:16:02,657 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,43051,1689531358519] 2023-07-16 18:16:02,657 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,43051,1689531358519; numProcessing=2 2023-07-16 18:16:02,659 DEBUG 
[RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,43051,1689531358519 already deleted, retry=false 2023-07-16 18:16:02,659 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,43051,1689531358519 expired; onlineServers=2 2023-07-16 18:16:02,661 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/WALs/jenkins-hbase4.apache.org,37401,1689531358669/jenkins-hbase4.apache.org%2C37401%2C1689531358669.meta.1689531359673.meta not finished, retry = 0 2023-07-16 18:16:02,756 DEBUG [RS:2;jenkins-hbase4:35551] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/oldWALs 2023-07-16 18:16:02,756 INFO [RS:2;jenkins-hbase4:35551] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C35551%2C1689531358817:(num 1689531359482) 2023-07-16 18:16:02,756 DEBUG [RS:2;jenkins-hbase4:35551] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:16:02,756 INFO [RS:2;jenkins-hbase4:35551] regionserver.LeaseManager(133): Closed leases 2023-07-16 18:16:02,756 INFO [RS:2;jenkins-hbase4:35551] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS, ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS] on shutdown 2023-07-16 18:16:02,756 INFO [RS:2;jenkins-hbase4:35551] regionserver.CompactSplit(434): Waiting for Split Thread to finish... 2023-07-16 18:16:02,756 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 18:16:02,756 INFO [RS:2;jenkins-hbase4:35551] regionserver.CompactSplit(434): Waiting for Large Compaction Thread to finish... 2023-07-16 18:16:02,756 INFO [RS:2;jenkins-hbase4:35551] regionserver.CompactSplit(434): Waiting for Small Compaction Thread to finish... 
2023-07-16 18:16:02,758 INFO [RS:2;jenkins-hbase4:35551] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:35551 2023-07-16 18:16:02,760 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:35551-0x1016f591f660003, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35551,1689531358817 2023-07-16 18:16:02,760 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:16:02,760 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:37401-0x1016f591f660002, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,35551,1689531358817 2023-07-16 18:16:02,762 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,35551,1689531358817] 2023-07-16 18:16:02,762 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,35551,1689531358817; numProcessing=3 2023-07-16 18:16:02,763 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,35551,1689531358817 already deleted, retry=false 2023-07-16 18:16:02,763 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,35551,1689531358817 expired; onlineServers=1 2023-07-16 18:16:02,764 DEBUG [RS:1;jenkins-hbase4:37401] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/oldWALs 2023-07-16 18:16:02,764 INFO [RS:1;jenkins-hbase4:37401] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37401%2C1689531358669.meta:.meta(num 1689531359673) 2023-07-16 18:16:02,767 WARN [Close-WAL-Writer-0] asyncfs.FanOutOneBlockAsyncDFSOutputHelper(641): complete file /user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/WALs/jenkins-hbase4.apache.org,37401,1689531358669/jenkins-hbase4.apache.org%2C37401%2C1689531358669.1689531359463 not finished, retry = 0 2023-07-16 18:16:02,870 DEBUG [RS:1;jenkins-hbase4:37401] wal.AbstractFSWAL(1028): Moved 1 WAL file(s) to /user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/oldWALs 2023-07-16 18:16:02,870 INFO [RS:1;jenkins-hbase4:37401] wal.AbstractFSWAL(1031): Closed WAL: AsyncFSWAL jenkins-hbase4.apache.org%2C37401%2C1689531358669:(num 1689531359463) 2023-07-16 18:16:02,870 DEBUG [RS:1;jenkins-hbase4:37401] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:16:02,870 INFO [RS:1;jenkins-hbase4:37401] regionserver.LeaseManager(133): Closed leases 2023-07-16 18:16:02,870 INFO [RS:1;jenkins-hbase4:37401] hbase.ChoreService(369): Chore service for: regionserver/jenkins-hbase4:0 had [ScheduledChore name=CompactionThroughputTuner, period=60000, unit=MILLISECONDS, ScheduledChore name=CompactedHFilesCleaner, period=120000, unit=MILLISECONDS] on shutdown 2023-07-16 18:16:02,870 INFO [regionserver/jenkins-hbase4:0.logRoller] wal.AbstractWALRoller(243): LogRoller exiting. 
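Around this point each region server closes its AsyncFSWAL and moves the WAL file into the shared oldWALs directory under the test data root; the "complete file ... not finished, retry = 0" warnings are only the async writer waiting for HDFS to finalize the last block before the move. A minimal sketch of checking that archival over HDFS, assuming the NameNode address (localhost:39679) and the test-data root taken from the log above (the mini cluster is ephemeral, so this is only meaningful while it is still up):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public final class ListOldWals {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://localhost:39679"); // NameNode port from this run
        Path oldWals = new Path(
            "/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/oldWALs");
        try (FileSystem fs = FileSystem.get(conf)) {
          for (FileStatus stat : fs.listStatus(oldWals)) {
            // One archived WAL per region server, e.g. the %2C43051%2C... file moved above.
            System.out.println(stat.getPath().getName() + " " + stat.getLen());
          }
        }
      }
    }
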
2023-07-16 18:16:02,871 INFO [RS:1;jenkins-hbase4:37401] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:37401 2023-07-16 18:16:02,873 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:37401-0x1016f591f660002, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/jenkins-hbase4.apache.org,37401,1689531358669 2023-07-16 18:16:02,873 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs 2023-07-16 18:16:02,875 INFO [RegionServerTracker-0] master.RegionServerTracker(179): RegionServer ephemeral node deleted, processing expiration [jenkins-hbase4.apache.org,37401,1689531358669] 2023-07-16 18:16:02,875 DEBUG [RegionServerTracker-0] master.DeadServer(103): Processing jenkins-hbase4.apache.org,37401,1689531358669; numProcessing=4 2023-07-16 18:16:02,876 DEBUG [RegionServerTracker-0] zookeeper.RecoverableZooKeeper(172): Node /hbase/draining/jenkins-hbase4.apache.org,37401,1689531358669 already deleted, retry=false 2023-07-16 18:16:02,876 INFO [RegionServerTracker-0] master.ServerManager(561): Cluster shutdown set; jenkins-hbase4.apache.org,37401,1689531358669 expired; onlineServers=0 2023-07-16 18:16:02,876 INFO [RegionServerTracker-0] regionserver.HRegionServer(2297): ***** STOPPING region server 'jenkins-hbase4.apache.org,32899,1689531358331' ***** 2023-07-16 18:16:02,876 INFO [RegionServerTracker-0] regionserver.HRegionServer(2311): STOPPED: Cluster shutdown set; onlineServer=0 2023-07-16 18:16:02,877 DEBUG [M:0;jenkins-hbase4:32899] ipc.AbstractRpcClient(190): Codec=org.apache.hadoop.hbase.codec.KeyValueCodec@193415d1, compressor=null, tcpKeepAlive=true, tcpNoDelay=true, connectTO=10000, readTO=20000, writeTO=60000, minIdleTimeBeforeClose=120000, maxRetries=0, fallbackAllowed=false, bind address=jenkins-hbase4.apache.org/172.31.14.131:0 2023-07-16 18:16:02,877 INFO [M:0;jenkins-hbase4:32899] regionserver.HRegionServer(1109): Stopping infoServer 2023-07-16 18:16:02,879 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/master 2023-07-16 18:16:02,879 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase 2023-07-16 18:16:02,880 DEBUG [zk-event-processor-pool-0] zookeeper.ZKUtil(164): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Set watcher on znode that does not yet exist, /hbase/master 2023-07-16 18:16:02,880 INFO [M:0;jenkins-hbase4:32899] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.w.WebAppContext@4afa3b8f{master,/,null,STOPPED}{jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/master} 2023-07-16 18:16:02,880 INFO [M:0;jenkins-hbase4:32899] server.AbstractConnector(383): Stopped ServerConnector@421485aa{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 18:16:02,880 INFO [M:0;jenkins-hbase4:32899] session.HouseKeeper(149): node0 Stopped scavenging 2023-07-16 18:16:02,881 INFO [M:0;jenkins-hbase4:32899] handler.ContextHandler(1159): Stopped 
o.a.h.t.o.e.j.s.ServletContextHandler@597d44c7{static,/static,jar:file:/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-server/target/hbase-server-2.4.18-SNAPSHOT.jar!/hbase-webapps/static,STOPPED} 2023-07-16 18:16:02,881 INFO [M:0;jenkins-hbase4:32899] handler.ContextHandler(1159): Stopped o.a.h.t.o.e.j.s.ServletContextHandler@2b2fc95d{logs,/logs,file:///home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/hadoop.log.dir/,STOPPED} 2023-07-16 18:16:02,882 INFO [M:0;jenkins-hbase4:32899] regionserver.HRegionServer(1144): stopping server jenkins-hbase4.apache.org,32899,1689531358331 2023-07-16 18:16:02,882 INFO [M:0;jenkins-hbase4:32899] regionserver.HRegionServer(1170): stopping server jenkins-hbase4.apache.org,32899,1689531358331; all regions closed. 2023-07-16 18:16:02,882 DEBUG [M:0;jenkins-hbase4:32899] ipc.AbstractRpcClient(494): Stopping rpc client 2023-07-16 18:16:02,882 INFO [M:0;jenkins-hbase4:32899] master.HMaster(1491): Stopping master jetty server 2023-07-16 18:16:02,882 INFO [M:0;jenkins-hbase4:32899] server.AbstractConnector(383): Stopped ServerConnector@3033a3f0{HTTP/1.1, (http/1.1)}{0.0.0.0:0} 2023-07-16 18:16:02,883 DEBUG [M:0;jenkins-hbase4:32899] cleaner.LogCleaner(198): Cancelling LogCleaner 2023-07-16 18:16:02,883 WARN [OldWALsCleaner-0] cleaner.LogCleaner(186): Interrupted while cleaning old WALs, will try to clean it next round. Exiting. 2023-07-16 18:16:02,883 DEBUG [M:0;jenkins-hbase4:32899] cleaner.HFileCleaner(317): Stopping file delete threads 2023-07-16 18:16:02,883 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689531359147] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.large.0-1689531359147,5,FailOnTimeoutGroup] 2023-07-16 18:16:02,883 INFO [M:0;jenkins-hbase4:32899] master.MasterMobCompactionThread(168): Waiting for Mob Compaction Thread to finish... 2023-07-16 18:16:02,883 DEBUG [master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689531359162] cleaner.HFileCleaner(288): Exit Thread[master/jenkins-hbase4:0:becomeActiveMaster-HFileCleaner.small.0-1689531359162,5,FailOnTimeoutGroup] 2023-07-16 18:16:02,883 INFO [M:0;jenkins-hbase4:32899] master.MasterMobCompactionThread(168): Waiting for Region Server Mob Compaction Thread to finish... 2023-07-16 18:16:02,883 INFO [M:0;jenkins-hbase4:32899] hbase.ChoreService(369): Chore service for: master/jenkins-hbase4:0 had [] on shutdown 2023-07-16 18:16:02,883 DEBUG [M:0;jenkins-hbase4:32899] master.HMaster(1512): Stopping service threads 2023-07-16 18:16:02,883 INFO [M:0;jenkins-hbase4:32899] procedure2.RemoteProcedureDispatcher(119): Stopping procedure remote dispatcher 2023-07-16 18:16:02,883 ERROR [M:0;jenkins-hbase4:32899] procedure2.ProcedureExecutor(653): ThreadGroup java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] contains running threads; null: See STDOUT java.lang.ThreadGroup[name=PEWorkerGroup,maxpri=10] 2023-07-16 18:16:02,884 INFO [M:0;jenkins-hbase4:32899] region.RegionProcedureStore(113): Stopping the Region Procedure Store, isAbort=false 2023-07-16 18:16:02,884 DEBUG [normalizer-worker-0] normalizer.RegionNormalizerWorker(174): interrupt detected. terminating. 
2023-07-16 18:16:02,884 DEBUG [M:0;jenkins-hbase4:32899] zookeeper.ZKUtil(398): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Unable to get data of znode /hbase/master because node does not exist (not an error) 2023-07-16 18:16:02,884 WARN [M:0;jenkins-hbase4:32899] master.ActiveMasterManager(326): Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null 2023-07-16 18:16:02,884 INFO [M:0;jenkins-hbase4:32899] assignment.AssignmentManager(315): Stopping assignment manager 2023-07-16 18:16:02,884 INFO [M:0;jenkins-hbase4:32899] region.MasterRegion(167): Closing local region {ENCODED => 1595e783b53d99cd5eef43b6debb2682, NAME => 'master:store,,1.1595e783b53d99cd5eef43b6debb2682.', STARTKEY => '', ENDKEY => ''}, isAbort=false 2023-07-16 18:16:02,884 DEBUG [M:0;jenkins-hbase4:32899] regionserver.HRegion(1604): Closing 1595e783b53d99cd5eef43b6debb2682, disabling compactions & flushes 2023-07-16 18:16:02,884 INFO [M:0;jenkins-hbase4:32899] regionserver.HRegion(1626): Closing region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 18:16:02,884 DEBUG [M:0;jenkins-hbase4:32899] regionserver.HRegion(1647): Waiting without time limit for close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 18:16:02,884 DEBUG [M:0;jenkins-hbase4:32899] regionserver.HRegion(1714): Acquired close lock on master:store,,1.1595e783b53d99cd5eef43b6debb2682. after waiting 0 ms 2023-07-16 18:16:02,884 DEBUG [M:0;jenkins-hbase4:32899] regionserver.HRegion(1724): Updates disabled for region master:store,,1.1595e783b53d99cd5eef43b6debb2682. 2023-07-16 18:16:02,884 INFO [M:0;jenkins-hbase4:32899] regionserver.HRegion(2745): Flushing 1595e783b53d99cd5eef43b6debb2682 1/1 column families, dataSize=76.23 KB heapSize=90.66 KB 2023-07-16 18:16:02,895 INFO [M:0;jenkins-hbase4:32899] regionserver.DefaultStoreFlusher(82): Flushed memstore data size=76.23 KB at sequenceid=175 (bloomFilter=true), to=hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/5e37ca79ade84306a2f3f65a190a1e1b 2023-07-16 18:16:02,900 DEBUG [M:0;jenkins-hbase4:32899] regionserver.HRegionFileSystem(485): Committing hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/.tmp/proc/5e37ca79ade84306a2f3f65a190a1e1b as hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/5e37ca79ade84306a2f3f65a190a1e1b 2023-07-16 18:16:02,904 INFO [M:0;jenkins-hbase4:32899] regionserver.HStore(1080): Added hdfs://localhost:39679/user/jenkins/test-data/46b660d4-bb55-7072-2fb4-691f81782058/MasterData/data/master/store/1595e783b53d99cd5eef43b6debb2682/proc/5e37ca79ade84306a2f3f65a190a1e1b, entries=22, sequenceid=175, filesize=11.1 K 2023-07-16 18:16:02,905 INFO [M:0;jenkins-hbase4:32899] regionserver.HRegion(2948): Finished flush of dataSize ~76.23 KB/78055, heapSize ~90.65 KB/92824, currentSize=0 B/0 for 1595e783b53d99cd5eef43b6debb2682 in 21ms, sequenceid=175, compaction requested=false 2023-07-16 18:16:02,907 INFO [M:0;jenkins-hbase4:32899] regionserver.HRegion(1838): Closed master:store,,1.1595e783b53d99cd5eef43b6debb2682. 
2023-07-16 18:16:02,907 DEBUG [M:0;jenkins-hbase4:32899] regionserver.HRegion(1558): Region close journal for 1595e783b53d99cd5eef43b6debb2682: 2023-07-16 18:16:02,910 INFO [M:0;jenkins-hbase4:32899] flush.MasterFlushTableProcedureManager(83): stop: server shutting down. 2023-07-16 18:16:02,910 INFO [master:store-WAL-Roller] wal.AbstractWALRoller(243): LogRoller exiting. 2023-07-16 18:16:02,911 INFO [M:0;jenkins-hbase4:32899] ipc.NettyRpcServer(158): Stopping server on /172.31.14.131:32899 2023-07-16 18:16:02,912 DEBUG [M:0;jenkins-hbase4:32899] zookeeper.RecoverableZooKeeper(172): Node /hbase/rs/jenkins-hbase4.apache.org,32899,1689531358331 already deleted, retry=false 2023-07-16 18:16:03,220 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:16:03,220 INFO [M:0;jenkins-hbase4:32899] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,32899,1689531358331; zookeeper connection closed. 2023-07-16 18:16:03,220 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): master:32899-0x1016f591f660000, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:16:03,320 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:37401-0x1016f591f660002, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:16:03,320 INFO [RS:1;jenkins-hbase4:37401] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,37401,1689531358669; zookeeper connection closed. 2023-07-16 18:16:03,320 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:37401-0x1016f591f660002, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:16:03,320 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1ca9d2fa] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1ca9d2fa 2023-07-16 18:16:03,420 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:35551-0x1016f591f660003, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:16:03,420 INFO [RS:2;jenkins-hbase4:35551] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,35551,1689531358817; zookeeper connection closed. 2023-07-16 18:16:03,420 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:35551-0x1016f591f660003, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:16:03,421 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4e4fed4c] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4e4fed4c 2023-07-16 18:16:03,520 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:43051-0x1016f591f660001, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:16:03,520 INFO [RS:0;jenkins-hbase4:43051] regionserver.HRegionServer(1227): Exiting; stopping=jenkins-hbase4.apache.org,43051,1689531358519; zookeeper connection closed. 
2023-07-16 18:16:03,520 DEBUG [Listener at localhost/42859-EventThread] zookeeper.ZKWatcher(600): regionserver:43051-0x1016f591f660001, quorum=127.0.0.1:54881, baseZNode=/hbase Received ZooKeeper Event, type=None, state=Closed, path=null 2023-07-16 18:16:03,521 INFO [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@4827a460] hbase.MiniHBaseCluster$SingleFileSystemShutdownThread(215): Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@4827a460 2023-07-16 18:16:03,522 INFO [Listener at localhost/42859] util.JVMClusterUtil(335): Shutdown of 1 master(s) and 4 regionserver(s) complete 2023-07-16 18:16:03,522 WARN [Listener at localhost/42859] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 18:16:03,527 INFO [Listener at localhost/42859] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 18:16:03,631 WARN [BP-861292803-172.31.14.131-1689531357576 heartbeating to localhost/127.0.0.1:39679] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 18:16:03,631 WARN [BP-861292803-172.31.14.131-1689531357576 heartbeating to localhost/127.0.0.1:39679] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-861292803-172.31.14.131-1689531357576 (Datanode Uuid 493e577e-7922-4974-8b72-ea296b4cae66) service to localhost/127.0.0.1:39679 2023-07-16 18:16:03,632 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/cluster_3cda5aa3-2ebb-c38f-dd35-e868f192707f/dfs/data/data5/current/BP-861292803-172.31.14.131-1689531357576] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 18:16:03,632 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/cluster_3cda5aa3-2ebb-c38f-dd35-e868f192707f/dfs/data/data6/current/BP-861292803-172.31.14.131-1689531357576] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 18:16:03,634 WARN [Listener at localhost/42859] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 18:16:03,638 INFO [Listener at localhost/42859] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 18:16:03,740 WARN [BP-861292803-172.31.14.131-1689531357576 heartbeating to localhost/127.0.0.1:39679] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 18:16:03,741 WARN [BP-861292803-172.31.14.131-1689531357576 heartbeating to localhost/127.0.0.1:39679] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-861292803-172.31.14.131-1689531357576 (Datanode Uuid c8c57d3f-0034-4295-802d-2a2c3d155fc7) service to localhost/127.0.0.1:39679 2023-07-16 18:16:03,742 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/cluster_3cda5aa3-2ebb-c38f-dd35-e868f192707f/dfs/data/data3/current/BP-861292803-172.31.14.131-1689531357576] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 18:16:03,742 WARN 
[refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/cluster_3cda5aa3-2ebb-c38f-dd35-e868f192707f/dfs/data/data4/current/BP-861292803-172.31.14.131-1689531357576] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 18:16:03,745 WARN [Listener at localhost/42859] datanode.DirectoryScanner(534): DirectoryScanner: shutdown has been called 2023-07-16 18:16:03,748 INFO [Listener at localhost/42859] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 18:16:03,852 WARN [BP-861292803-172.31.14.131-1689531357576 heartbeating to localhost/127.0.0.1:39679] datanode.IncrementalBlockReportManager(160): IncrementalBlockReportManager interrupted 2023-07-16 18:16:03,852 WARN [BP-861292803-172.31.14.131-1689531357576 heartbeating to localhost/127.0.0.1:39679] datanode.BPServiceActor(857): Ending block pool service for: Block pool BP-861292803-172.31.14.131-1689531357576 (Datanode Uuid e8550b7d-3fad-4456-a753-a006c77d6029) service to localhost/127.0.0.1:39679 2023-07-16 18:16:03,852 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/cluster_3cda5aa3-2ebb-c38f-dd35-e868f192707f/dfs/data/data1/current/BP-861292803-172.31.14.131-1689531357576] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 18:16:03,853 WARN [refreshUsed-/home/jenkins/jenkins-home/workspace/HBase-Flaky-Tests_branch-2.4/hbase-rsgroup/target/test-data/5f6dd96a-5549-784d-879e-773f11a8ed4c/cluster_3cda5aa3-2ebb-c38f-dd35-e868f192707f/dfs/data/data2/current/BP-861292803-172.31.14.131-1689531357576] fs.CachingGetSpaceUsed$RefreshThread(183): Thread Interrupted waiting to refresh disk information: sleep interrupted 2023-07-16 18:16:03,863 INFO [Listener at localhost/42859] log.Slf4jLog(67): Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:0 2023-07-16 18:16:03,977 INFO [Listener at localhost/42859] zookeeper.MiniZooKeeperCluster(344): Shutdown MiniZK cluster with all ZK servers 2023-07-16 18:16:04,007 INFO [Listener at localhost/42859] hbase.HBaseTestingUtility(1293): Minicluster is down
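
The tail of the log is the standard teardown for this test family: JVMClusterUtil reports the master and all four region servers down, the three DataNodes stop their block-pool service and disk-usage refresh threads, the MiniZK cluster is shut down, and HBaseTestingUtility finally logs "Minicluster is down". A sketch of the kind of JUnit hook that drives this sequence, assuming the shared-utility pattern used by the rsgroup tests (field and class names here are illustrative):

    import org.apache.hadoop.hbase.HBaseTestingUtility;
    import org.junit.AfterClass;

    public class MiniClusterTeardownExample {
      // Shared test utility, assumed to have been started in a @BeforeClass hook.
      static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();

      @AfterClass
      public static void tearDownAfterClass() throws Exception {
        // Stops the HBase mini cluster (master + region servers), then the HDFS
        // mini cluster (DataNodes/NameNode) and the MiniZooKeeper cluster,
        // ending with the "Minicluster is down" line above.
        TEST_UTIL.shutdownMiniCluster();
      }
    }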